Each record describes one reviewed pull request and has the following fields:

| Field | Type | Length / range |
|---|---|---|
| instance_id | string | 21–53 chars |
| repo | string | 188 distinct values |
| language | string | 1 distinct value |
| pull_number | int64 | 20 – 148k |
| title | string | 6–144 chars |
| body | string | 0–83.4k chars |
| created_at | string (date) | 2015-09-25 03:17:17 – 2025-07-10 16:50:35 |
| problem_statement | string | 188–240k chars |
| hints_text | string | 0–145k chars |
| resolved_issues | list | 1–6 items |
| base_commit | string | 40 chars (full commit SHA) |
| commit_to_review | dict | |
| reference_review_comments | list | 1–62 items |
| merged_commit | string | 40 chars (full commit SHA) |
| merged_patch | string | 297–9.87M chars |
| metadata | dict | |
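Since the table above is the record structure, here is a short sketch of how such a record might be loaded and inspected with the Hugging Face `datasets` library. The dataset path `org/pr-review-dataset` is a placeholder (the actual repository name is not given in this excerpt); the field accesses mirror the schema and the two example records reproduced below.

```python
from datasets import load_dataset

# Placeholder dataset path -- substitute the real repository name.
ds = load_dataset("org/pr-review-dataset", split="train")

record = ds[0]
print(record["instance_id"])                  # e.g. "xorbitsai__inference-1168@f3ada96"
print(record["repo"], record["pull_number"])  # repository and PR number
print(record["title"])                        # PR title

# The commit under review and the diff submitted for review.
review = record["commit_to_review"]
print(review["head_commit"], review["head_commit_message"])
print(review["patch_to_review"][:300])        # first 300 characters of the diff

# Human review comments used as references.
for comment in record["reference_review_comments"]:
    print(comment["path"], "-", comment["text"][:80])
```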
Example record 1

instance_id: xorbitsai__inference-1168@f3ada96
repo: xorbitsai/inference
language: Python
pull_number: 1168
title: FEAT: OAuth system supports api-key
body: The built-in permission system now supports API key authentication. Fixes #918
created_at: 2024-03-21T02:28:24Z
problem_statement:
Any further plan to add access key for rest API call As mentioned in title . Model services are exposed by http reuqest for many cases, seems there is no palce in system to add authentication in http request (like the bare token in openai ), or it's already in the system? Any related infomation/documentation is welcomed !!!
hints_text:
We have implemented [OAuth2 system](https://inference.readthedocs.io/en/latest/user_guide/auth_system.html) in #793 , access key will be supported in the next two to three versions. > We have implemented [OAuth2 system](https://inference.readthedocs.io/en/latest/user_guide/auth_system.html) in #793 , access key will be supported in the next two to three versions. It seems the OAuth2 system is designed for the model management (correct me if i m wrong). The access key i mentioned is for the api authentication , that would be great if that is also what you meant very appreciate the team's contribution
resolved_issues:
[ { "body": "As mentioned in title .\r\n\r\nModel services are exposed by http reuqest for many cases, seems there is no palce in system to add authentication in http request (like the bare token in openai ), or it's already in the system?\r\n\r\nAny related infomation/documentation is welcomed !!!", "number": 918, "title": "Any further plan to add access key for rest API call" } ]
base_commit: 31032869b5e1f9a0915723d9e689a3a55eb9c9d0
commit_to_review:
{ "head_commit": "f3ada96298b9d5f98f25547857e3b35452b5480a", "head_commit_message": "bug fix", "patch_to_review": "diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst\nindex 80aa266dfb..25ee11cc71 100644\n--- a/doc/source/development/contributing_environment.rst\n+++ b/doc/source/development/contributing_environment.rst\n@@ -8,7 +8,7 @@ Creating a development environment\n Before proceeding with any code modifications, it's essential to set up the necessary environment for Xinference development,\n which includes familiarizing yourself with Git usage, establishing an isolated environment, installing Xinference, and compiling the frontend.\n \n-Getting startted with Git\n+Getting started with Git\n -------------------------\n \n Now that you have identified an issue you wish to resolve, an enhancement to incorporate, or documentation to enhance,\ndiff --git a/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po b/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po\nindex 8ced444523..7aa3765a32 100644\n--- a/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po\n+++ b/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po\n@@ -8,7 +8,7 @@ msgid \"\"\n msgstr \"\"\n \"Project-Id-Version: Xinference \\n\"\n \"Report-Msgid-Bugs-To: \\n\"\n-\"POT-Creation-Date: 2024-03-06 16:29+0800\\n\"\n+\"POT-Creation-Date: 2024-03-21 09:59+0800\\n\"\n \"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n \"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n \"Language: zh_CN\\n\"\n@@ -38,7 +38,7 @@ msgstr \"\"\n \"Xinference 以及前端部分的编译。\"\n \n #: ../../source/development/contributing_environment.rst:12\n-msgid \"Getting startted with Git\"\n+msgid \"Getting started with Git\"\n msgstr \"Git 的使用\"\n \n #: ../../source/development/contributing_environment.rst:14\ndiff --git a/xinference/api/oauth2/auth_service.py b/xinference/api/oauth2/auth_service.py\nindex 2f11c26f18..2dfa5a474b 100644\n--- a/xinference/api/oauth2/auth_service.py\n+++ b/xinference/api/oauth2/auth_service.py\n@@ -11,6 +11,7 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+import re\n from datetime import timedelta\n from typing import List, Optional\n \n@@ -40,13 +41,33 @@ def __init__(self, auth_config_file: Optional[str]):\n def config(self):\n return self._config\n \n+ @staticmethod\n+ def is_legal_api_key(key: str):\n+ pattern = re.compile(\"^[sk]{2}-[a-zA-Z0-9]{48}$\")\n+ if re.match(pattern, key):\n+ return True\n+ else:\n+ return False\n+\n def init_auth_config(self):\n if self._auth_config_file:\n config: AuthStartupConfig = parse_file_as(\n path=self._auth_config_file, type_=AuthStartupConfig\n )\n+ total_keys = set()\n for user in config.user_config:\n user.password = get_password_hash(user.password)\n+ if len(set(user.api_keys)) != len(user.api_keys):\n+ raise ValueError(\"User has duplicate Api-Keys\")\n+ for api_key in user.api_keys:\n+ if not self.is_legal_api_key(api_key):\n+ raise ValueError(\n+ \"Api-Key should be a string started with 'sk-' with a total length of 51\"\n+ )\n+ if api_key in total_keys:\n+ raise ValueError(\"Api-Keys of different users have conflict\")\n+ else:\n+ total_keys.add(api_key)\n return config\n \n def __call__(\n@@ -67,22 +88,33 @@ def __call__(\n headers={\"WWW-Authenticate\": authenticate_value},\n )\n \n+ 
through_api_key = False\n+\n try:\n assert self._config is not None\n- payload = jwt.decode(\n- token,\n- self._config.auth_config.secret_key,\n- algorithms=[self._config.auth_config.algorithm],\n- options={\"verify_exp\": False}, # TODO: supports token expiration\n- )\n- username: str = payload.get(\"sub\")\n- if username is None:\n- raise credentials_exception\n- token_scopes = payload.get(\"scopes\", [])\n- token_data = TokenData(scopes=token_scopes, username=username)\n+ if self.is_legal_api_key(token):\n+ through_api_key = True\n+ else:\n+ payload = jwt.decode(\n+ token,\n+ self._config.auth_config.secret_key,\n+ algorithms=[self._config.auth_config.algorithm],\n+ options={\"verify_exp\": False}, # TODO: supports token expiration\n+ )\n+ username: str = payload.get(\"sub\")\n+ if username is None:\n+ raise credentials_exception\n+ token_scopes = payload.get(\"scopes\", [])\n+ token_data = TokenData(scopes=token_scopes, username=username)\n except (JWTError, ValidationError):\n raise credentials_exception\n- user = self.get_user(token_data.username)\n+ if not through_api_key:\n+ user = self.get_user(token_data.username)\n+ else:\n+ user = self.get_user_with_api_key(token)\n+ if user is None:\n+ raise credentials_exception\n+ token_data = TokenData(scopes=user.permissions, username=user.username)\n if user is None:\n raise credentials_exception\n if \"admin\" in token_data.scopes:\n@@ -102,6 +134,13 @@ def get_user(self, username: str) -> Optional[User]:\n return user\n return None\n \n+ def get_user_with_api_key(self, api_key: str) -> Optional[User]:\n+ for user in self._config.user_config:\n+ for key in user.api_keys:\n+ if api_key == key:\n+ return user\n+ return None\n+\n def authenticate_user(self, username: str, password: str):\n user = self.get_user(username)\n if not user:\ndiff --git a/xinference/api/oauth2/types.py b/xinference/api/oauth2/types.py\nindex 106680deac..deb5740a19 100644\n--- a/xinference/api/oauth2/types.py\n+++ b/xinference/api/oauth2/types.py\n@@ -23,6 +23,7 @@ class LoginUserForm(BaseModel):\n \n class User(LoginUserForm):\n permissions: List[str]\n+ api_keys: List[str]\n \n \n class AuthConfig(BaseModel):\ndiff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py\nindex ca5d8ef0a3..0e6c00f23a 100644\n--- a/xinference/client/restful/restful_client.py\n+++ b/xinference/client/restful/restful_client.py\n@@ -651,11 +651,13 @@ def translations(\n \n \n class Client:\n- def __init__(self, base_url):\n+ def __init__(self, base_url, api_key: Optional[str] = None):\n self.base_url = base_url\n- self._headers = {}\n+ self._headers: Dict[str, str] = {}\n self._cluster_authed = False\n self._check_cluster_authenticated()\n+ if api_key is not None:\n+ self._headers[\"Authorization\"] = f\"Bearer {api_key}\"\n \n def _set_token(self, token: Optional[str]):\n if not self._cluster_authed or token is None:\ndiff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py\nindex df620023e5..5e27fe15c1 100644\n--- a/xinference/deploy/cmdline.py\n+++ b/xinference/deploy/cmdline.py\n@@ -376,18 +376,27 @@ def worker(\n is_flag=True,\n help=\"Persist the model configuration to the filesystem, retains the model registration after server restarts.\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def register_model(\n endpoint: Optional[str],\n model_type: str,\n file: str,\n persist: bool,\n+ api_key: Optional[str],\n ):\n 
endpoint = get_endpoint(endpoint)\n with open(file) as fd:\n model = fd.read()\n \n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n client.register_model(\n model_type=model_type,\n model=model,\n@@ -408,15 +417,24 @@ def register_model(\n help=\"Type of model to unregister (default is 'LLM').\",\n )\n @click.option(\"--model-name\", \"-n\", type=str, help=\"Name of the model to unregister.\")\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def unregister_model(\n endpoint: Optional[str],\n model_type: str,\n model_name: str,\n+ api_key: Optional[str],\n ):\n endpoint = get_endpoint(endpoint)\n \n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n client.unregister_model(\n model_type=model_type,\n model_name=model_name,\n@@ -437,15 +455,24 @@ def unregister_model(\n type=str,\n help=\"Filter by model type (default is 'LLM').\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def list_model_registrations(\n endpoint: Optional[str],\n model_type: str,\n+ api_key: Optional[str],\n ):\n from tabulate import tabulate\n \n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n \n registrations = client.list_model_registrations(model_type=model_type)\n \n@@ -638,6 +665,13 @@ def list_model_registrations(\n type=bool,\n help=\"Whether or not to allow for custom models defined on the Hub in their own modeling files.\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n @click.pass_context\n def model_launch(\n ctx,\n@@ -654,6 +688,7 @@ def model_launch(\n image_lora_load_kwargs: Optional[Tuple],\n image_lora_fuse_kwargs: Optional[Tuple],\n trust_remote_code: bool,\n+ api_key: Optional[str],\n ):\n kwargs = {}\n for i in range(0, len(ctx.args), 2):\n@@ -686,8 +721,9 @@ def model_launch(\n if size_in_billions is None or \"_\" in size_in_billions\n else int(size_in_billions)\n )\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n \n model_uid = client.launch_model(\n model_name=model_name,\n@@ -718,12 +754,20 @@ def model_launch(\n type=str,\n help=\"Xinference endpoint.\",\n )\n-def model_list(endpoint: Optional[str]):\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n+def model_list(endpoint: Optional[str], api_key: Optional[str]):\n from tabulate import tabulate\n \n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- 
client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n \n llm_table = []\n embedding_table = []\n@@ -844,13 +888,22 @@ def model_list(endpoint: Optional[str]):\n required=True,\n help=\"The unique identifier (UID) of the model.\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def model_terminate(\n endpoint: Optional[str],\n model_uid: str,\n+ api_key: Optional[str],\n ):\n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n client.terminate_model(model_uid=model_uid)\n \n \n@@ -873,15 +926,24 @@ def model_terminate(\n type=bool,\n help=\"Whether to stream the generated text. Use 'True' for streaming (default is True).\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def model_generate(\n endpoint: Optional[str],\n model_uid: str,\n max_tokens: int,\n stream: bool,\n+ api_key: Optional[str],\n ):\n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n if stream:\n # TODO: when stream=True, RestfulClient cannot generate words one by one.\n # So use Client in temporary. The implementation needs to be changed to\n@@ -959,16 +1021,25 @@ async def generate_internal():\n type=bool,\n help=\"Whether to stream the chat messages. 
Use 'True' for streaming (default is True).\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def model_chat(\n endpoint: Optional[str],\n model_uid: str,\n max_tokens: int,\n stream: bool,\n+ api_key: Optional[str],\n ):\n # TODO: chat model roles may not be user and assistant.\n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n \n chat_history: \"List[ChatCompletionMessage]\" = []\n if stream:\n@@ -1048,10 +1119,18 @@ async def chat_internal():\n \n @cli.command(\"vllm-models\", help=\"Query and display models compatible with vLLM.\")\n @click.option(\"--endpoint\", \"-e\", type=str, help=\"Xinference endpoint.\")\n-def vllm_models(endpoint: Optional[str]):\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n+def vllm_models(endpoint: Optional[str], api_key: Optional[str]):\n endpoint = get_endpoint(endpoint)\n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:\n+ client._set_token(get_stored_token(endpoint, client))\n vllm_models_dict = client.vllm_models()\n print(\"VLLM supported model families:\")\n chat_models = vllm_models_dict[\"chat\"]\n" }
reference_review_comments:
[ { "diff_hunk": "@@ -67,22 +88,33 @@ def __call__(\n headers={\"WWW-Authenticate\": authenticate_value},\n )\n \n+ through_api_key = False\n+\n try:\n assert self._config is not None\n- payload = jwt.decode(\n- token,\n- self._config.auth_config.secret_key,\n- algorithms=[self._config.auth_config.algorithm],\n- options={\"verify_exp\": False}, # TODO: supports token expiration\n- )\n- username: str = payload.get(\"sub\")\n- if username is None:\n- raise credentials_exception\n- token_scopes = payload.get(\"scopes\", [])\n- token_data = TokenData(scopes=token_scopes, username=username)\n+ if self.is_legal_api_key(token):\n+ through_api_key = True\n+ else:\n+ payload = jwt.decode(\n+ token,\n+ self._config.auth_config.secret_key,\n+ algorithms=[self._config.auth_config.algorithm],\n+ options={\"verify_exp\": False}, # TODO: supports token expiration\n+ )\n+ username: str = payload.get(\"sub\")\n+ if username is None:\n+ raise credentials_exception\n+ token_scopes = payload.get(\"scopes\", [])\n+ token_data = TokenData(scopes=token_scopes, username=username)\n except (JWTError, ValidationError):\n raise credentials_exception\n- user = self.get_user(token_data.username)\n+ if not through_api_key:\n+ user = self.get_user(token_data.username)\n+ else:\n+ user = self.get_user_with_api_key(token)", "line": null, "original_line": 114, "original_start_line": 111, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\nJust let the `user` inside the `if else`." }, { "diff_hunk": "@@ -651,11 +651,13 @@ def translations(\n \n \n class Client:\n- def __init__(self, base_url):\n+ def __init__(self, base_url, api_key: Optional[str] = None):\n self.base_url = base_url\n- self._headers = {}\n+ self._headers: Dict[str, str] = {}\n self._cluster_authed = False\n self._check_cluster_authenticated()\n+ if api_key is not None:", "line": null, "original_line": 659, "original_start_line": null, "path": "xinference/client/restful/restful_client.py", "start_line": null, "text": "@user1:\n`if api_key is not None and self._cluster_authed`" }, { "diff_hunk": "@@ -67,22 +88,33 @@ def __call__(\n headers={\"WWW-Authenticate\": authenticate_value},\n )\n \n+ through_api_key = False\n+\n try:\n assert self._config is not None\n- payload = jwt.decode(\n- token,\n- self._config.auth_config.secret_key,\n- algorithms=[self._config.auth_config.algorithm],\n- options={\"verify_exp\": False}, # TODO: supports token expiration\n- )\n- username: str = payload.get(\"sub\")\n- if username is None:\n- raise credentials_exception\n- token_scopes = payload.get(\"scopes\", [])\n- token_data = TokenData(scopes=token_scopes, username=username)\n+ if self.is_legal_api_key(token):\n+ through_api_key = True\n+ else:\n+ payload = jwt.decode(\n+ token,\n+ self._config.auth_config.secret_key,\n+ algorithms=[self._config.auth_config.algorithm],\n+ options={\"verify_exp\": False}, # TODO: supports token expiration\n+ )\n+ username: str = payload.get(\"sub\")\n+ if username is None:\n+ raise credentials_exception\n+ token_scopes = payload.get(\"scopes\", [])\n+ token_data = TokenData(scopes=token_scopes, username=username)\n except (JWTError, ValidationError):\n raise credentials_exception\n- user = self.get_user(token_data.username)\n+ if not through_api_key:\n+ user = self.get_user(token_data.username)\n+ else:\n+ user = self.get_user_with_api_key(token)\n+ if user is None:\n+ raise credentials_exception\n+ token_data = TokenData(scopes=user.permissions, username=user.username)", "line": null, "original_line": 117, 
"original_start_line": 115, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\nDoes this lines be redundant?" }, { "diff_hunk": "@@ -67,22 +88,33 @@ def __call__(\n headers={\"WWW-Authenticate\": authenticate_value},\n )\n \n+ through_api_key = False", "line": null, "original_line": 91, "original_start_line": null, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\nWhy need this bool variable? Just use `self.is_legal_api_key(token)` is fine." }, { "diff_hunk": "@@ -40,13 +41,33 @@ def __init__(self, auth_config_file: Optional[str]):\n def config(self):\n return self._config\n \n+ @staticmethod\n+ def is_legal_api_key(key: str):\n+ pattern = re.compile(\"^[sk]{2}-[a-zA-Z0-9]{48}$\")\n+ if re.match(pattern, key):\n+ return True\n+ else:\n+ return False\n+\n def init_auth_config(self):\n if self._auth_config_file:\n config: AuthStartupConfig = parse_file_as(\n path=self._auth_config_file, type_=AuthStartupConfig\n )\n+ total_keys = set()\n for user in config.user_config:\n user.password = get_password_hash(user.password)\n+ if len(set(user.api_keys)) != len(user.api_keys):\n+ raise ValueError(\"User has duplicate Api-Keys\")", "line": null, "original_line": 61, "original_start_line": 60, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\nDoes this logic be duplicated with the code below.\r\nThe below code can detect the situation that having duplicate api-keys of a user." }, { "diff_hunk": "@@ -40,13 +41,33 @@ def __init__(self, auth_config_file: Optional[str]):\n def config(self):\n return self._config\n \n+ @staticmethod\n+ def is_legal_api_key(key: str):\n+ pattern = re.compile(\"^[sk]{2}-[a-zA-Z0-9]{48}$\")\n+ if re.match(pattern, key):\n+ return True\n+ else:\n+ return False\n+\n def init_auth_config(self):\n if self._auth_config_file:\n config: AuthStartupConfig = parse_file_as(\n path=self._auth_config_file, type_=AuthStartupConfig\n )\n+ total_keys = set()\n for user in config.user_config:\n user.password = get_password_hash(user.password)\n+ if len(set(user.api_keys)) != len(user.api_keys):\n+ raise ValueError(\"User has duplicate Api-Keys\")\n+ for api_key in user.api_keys:\n+ if not self.is_legal_api_key(api_key):\n+ raise ValueError(\n+ \"Api-Key should be a string started with 'sk-' with a total length of 51\"\n+ )\n+ if api_key in total_keys:\n+ raise ValueError(\"Api-Keys of different users have conflict\")", "line": null, "original_line": 68, "original_start_line": null, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\n`Duplicate api-keys exists, please check your configuration`" }, { "diff_hunk": "@@ -40,13 +41,33 @@ def __init__(self, auth_config_file: Optional[str]):\n def config(self):\n return self._config\n \n+ @staticmethod\n+ def is_legal_api_key(key: str):\n+ pattern = re.compile(\"^[sk]{2}-[a-zA-Z0-9]{48}$\")", "line": null, "original_line": 46, "original_start_line": null, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\n48 is too long. 
We do not have to be same with OpenAI.\r\nJust 10 is fine" }, { "diff_hunk": "@@ -40,13 +41,33 @@ def __init__(self, auth_config_file: Optional[str]):\n def config(self):\n return self._config\n \n+ @staticmethod\n+ def is_legal_api_key(key: str):\n+ pattern = re.compile(\"^[sk]{2}-[a-zA-Z0-9]{48}$\")\n+ if re.match(pattern, key):\n+ return True\n+ else:\n+ return False\n+\n def init_auth_config(self):\n if self._auth_config_file:\n config: AuthStartupConfig = parse_file_as(\n path=self._auth_config_file, type_=AuthStartupConfig\n )\n+ total_keys = set()", "line": null, "original_line": 57, "original_start_line": null, "path": "xinference/api/oauth2/auth_service.py", "start_line": null, "text": "@user1:\nRename the variable to `all_api_keys`" }, { "diff_hunk": "@@ -376,18 +376,27 @@ def worker(\n is_flag=True,\n help=\"Persist the model configuration to the filesystem, retains the model registration after server restarts.\",\n )\[email protected](\n+ \"--api-key\",\n+ \"-ak\",\n+ default=None,\n+ type=str,\n+ help=\"Api-Key for access xinference api with authorization.\",\n+)\n def register_model(\n endpoint: Optional[str],\n model_type: str,\n file: str,\n persist: bool,\n+ api_key: Optional[str],\n ):\n endpoint = get_endpoint(endpoint)\n with open(file) as fd:\n model = fd.read()\n \n- client = RESTfulClient(base_url=endpoint)\n- client._set_token(get_stored_token(endpoint, client))\n+ client = RESTfulClient(base_url=endpoint, api_key=api_key)\n+ if client._get_token() is None:", "line": null, "original_line": 398, "original_start_line": null, "path": "xinference/deploy/cmdline.py", "start_line": null, "text": "@user1:\nShould this be `if api_key is None` ?\r\nSame as all other places." } ]
merged_commit: 5acf72f6438e18f3c739e436b8456b94eff3ea79
merged_patch:
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index 80aa266dfb..25ee11cc71 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -8,7 +8,7 @@ Creating a development environment Before proceeding with any code modifications, it's essential to set up the necessary environment for Xinference development, which includes familiarizing yourself with Git usage, establishing an isolated environment, installing Xinference, and compiling the frontend. -Getting startted with Git +Getting started with Git ------------------------- Now that you have identified an issue you wish to resolve, an enhancement to incorporate, or documentation to enhance, diff --git a/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po b/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po index 8ced444523..7aa3765a32 100644 --- a/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po +++ b/doc/source/locale/zh_CN/LC_MESSAGES/development/contributing_environment.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Xinference \n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-03-06 16:29+0800\n" +"POT-Creation-Date: 2024-03-21 09:59+0800\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Language: zh_CN\n" @@ -38,7 +38,7 @@ msgstr "" "Xinference 以及前端部分的编译。" #: ../../source/development/contributing_environment.rst:12 -msgid "Getting startted with Git" +msgid "Getting started with Git" msgstr "Git 的使用" #: ../../source/development/contributing_environment.rst:14 diff --git a/xinference/api/oauth2/auth_service.py b/xinference/api/oauth2/auth_service.py index 2f11c26f18..7de97c1020 100644 --- a/xinference/api/oauth2/auth_service.py +++ b/xinference/api/oauth2/auth_service.py @@ -11,8 +11,9 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+import re from datetime import timedelta -from typing import List, Optional +from typing import List, Optional, Tuple from fastapi import Depends, HTTPException, status from fastapi.security import OAuth2PasswordBearer, SecurityScopes @@ -40,13 +41,30 @@ def __init__(self, auth_config_file: Optional[str]): def config(self): return self._config + @staticmethod + def is_legal_api_key(key: str) -> bool: + pattern = re.compile("^sk-[a-zA-Z0-9]{13}$") + return re.match(pattern, key) is not None + def init_auth_config(self): if self._auth_config_file: config: AuthStartupConfig = parse_file_as( path=self._auth_config_file, type_=AuthStartupConfig ) + all_api_keys = set() for user in config.user_config: user.password = get_password_hash(user.password) + for api_key in user.api_keys: + if not self.is_legal_api_key(api_key): + raise ValueError( + "Api-Key should be a string started with 'sk-' with a total length of 16" + ) + if api_key in all_api_keys: + raise ValueError( + "Duplicate api-keys exists, please check your configuration" + ) + else: + all_api_keys.add(api_key) return config def __call__( @@ -67,28 +85,30 @@ def __call__( headers={"WWW-Authenticate": authenticate_value}, ) - try: - assert self._config is not None - payload = jwt.decode( - token, - self._config.auth_config.secret_key, - algorithms=[self._config.auth_config.algorithm], - options={"verify_exp": False}, # TODO: supports token expiration - ) - username: str = payload.get("sub") - if username is None: + if self.is_legal_api_key(token): + user, token_scopes = self.get_user_and_scopes_with_api_key(token) + else: + try: + assert self._config is not None + payload = jwt.decode( + token, + self._config.auth_config.secret_key, + algorithms=[self._config.auth_config.algorithm], + options={"verify_exp": False}, # TODO: supports token expiration + ) + username: str = payload.get("sub") + if username is None: + raise credentials_exception + token_scopes = payload.get("scopes", []) + user = self.get_user(username) + except (JWTError, ValidationError): raise credentials_exception - token_scopes = payload.get("scopes", []) - token_data = TokenData(scopes=token_scopes, username=username) - except (JWTError, ValidationError): - raise credentials_exception - user = self.get_user(token_data.username) if user is None: raise credentials_exception - if "admin" in token_data.scopes: + if "admin" in token_scopes: return user for scope in security_scopes.scopes: - if scope not in token_data.scopes: + if scope not in token_scopes: raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Not enough permissions", @@ -102,6 +122,15 @@ def get_user(self, username: str) -> Optional[User]: return user return None + def get_user_and_scopes_with_api_key( + self, api_key: str + ) -> Tuple[Optional[User], List]: + for user in self._config.user_config: + for key in user.api_keys: + if api_key == key: + return user, user.permissions + return None, [] + def authenticate_user(self, username: str, password: str): user = self.get_user(username) if not user: diff --git a/xinference/api/oauth2/types.py b/xinference/api/oauth2/types.py index 106680deac..deb5740a19 100644 --- a/xinference/api/oauth2/types.py +++ b/xinference/api/oauth2/types.py @@ -23,6 +23,7 @@ class LoginUserForm(BaseModel): class User(LoginUserForm): permissions: List[str] + api_keys: List[str] class AuthConfig(BaseModel): diff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py index ca5d8ef0a3..f712ff83a2 100644 --- 
a/xinference/client/restful/restful_client.py +++ b/xinference/client/restful/restful_client.py @@ -651,11 +651,13 @@ def translations( class Client: - def __init__(self, base_url): + def __init__(self, base_url, api_key: Optional[str] = None): self.base_url = base_url - self._headers = {} + self._headers: Dict[str, str] = {} self._cluster_authed = False self._check_cluster_authenticated() + if api_key is not None and self._cluster_authed: + self._headers["Authorization"] = f"Bearer {api_key}" def _set_token(self, token: Optional[str]): if not self._cluster_authed or token is None: diff --git a/xinference/client/tests/test_client_with_auth.py b/xinference/client/tests/test_client_with_auth.py index d64033dfa6..68a6bc3221 100644 --- a/xinference/client/tests/test_client_with_auth.py +++ b/xinference/client/tests/test_client_with_auth.py @@ -47,3 +47,57 @@ def test_client_auth(setup_with_auth): assert len(client.list_models()) == 1 client.terminate_model(model_uid=model_uid) assert len(client.list_models()) == 0 + + # test with api-key + client = RESTfulClient(endpoint, api_key="sk-wrongapikey12") + with pytest.raises(RuntimeError): + client.list_models() + + client = RESTfulClient(endpoint, api_key="sk-72tkvudyGLPMi") + assert len(client.list_models()) == 0 + + with pytest.raises(RuntimeError): + client.launch_model(model_name="bge-small-en-v1.5", model_type="embedding") + + client = RESTfulClient(endpoint, api_key="sk-ZOTLIY4gt9w11") + model_uid = client.launch_model( + model_name="bge-small-en-v1.5", model_type="embedding" + ) + model = client.get_model(model_uid=model_uid) + assert isinstance(model, RESTfulEmbeddingModelHandle) + + completion = model.create_embedding("write a poem.") + assert len(completion["data"][0]["embedding"]) == 384 + + with pytest.raises(RuntimeError): + client.terminate_model(model_uid=model_uid) + + client = RESTfulClient(endpoint, api_key="sk-3sjLbdwqAhhAF") + assert len(client.list_models()) == 1 + + # test with openai SDK + from openai import AuthenticationError, OpenAI, PermissionDeniedError + + client_ai = OpenAI(base_url=endpoint + "/v1", api_key="sk-wrongapikey12") + with pytest.raises(AuthenticationError): + client_ai.models.list() + + client_ai = OpenAI(base_url=endpoint + "/v1", api_key="sk-72tkvudyGLPMi") + assert len(client_ai.models.list().data) == 1 + with pytest.raises(PermissionDeniedError): + chat_completion = client_ai.embeddings.create( + model="bge-small-en-v1.5", + input="write a poem.", + ) + + client_ai = OpenAI(base_url=endpoint + "/v1", api_key="sk-ZOTLIY4gt9w11") + chat_completion = client_ai.embeddings.create( + model="bge-small-en-v1.5", + input="write a poem.", + ) + assert len(chat_completion.data[0].embedding) == 384 + + client_ai = OpenAI(base_url=endpoint + "/v1", api_key="sk-3sjLbdwqAhhAF") + client.terminate_model(model_uid) + assert len(client.list_models()) == 0 + assert len(client_ai.models.list().data) == 0 diff --git a/xinference/conftest.py b/xinference/conftest.py index 0d2822969d..1dfeae0f0e 100644 --- a/xinference/conftest.py +++ b/xinference/conftest.py @@ -261,12 +261,23 @@ def setup_with_auth(): if not cluster_health_check(supervisor_addr, max_attempts=10, sleep_interval=3): raise RuntimeError("Cluster is not available after multiple attempts") - user1 = User(username="user1", password="pass1", permissions=["admin"]) - user2 = User(username="user2", password="pass2", permissions=["models:list"]) + user1 = User( + username="user1", + password="pass1", + permissions=["admin"], + api_keys=["sk-3sjLbdwqAhhAF", 
"sk-0HCRO1rauFQDL"], + ) + user2 = User( + username="user2", + password="pass2", + permissions=["models:list"], + api_keys=["sk-72tkvudyGLPMi"], + ) user3 = User( username="user3", password="pass3", permissions=["models:list", "models:read", "models:start"], + api_keys=["sk-m6jEzEwmCc4iQ", "sk-ZOTLIY4gt9w11"], ) auth_config = AuthConfig( algorithm="HS256", diff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py index df620023e5..ca1633598d 100644 --- a/xinference/deploy/cmdline.py +++ b/xinference/deploy/cmdline.py @@ -376,18 +376,27 @@ def worker( is_flag=True, help="Persist the model configuration to the filesystem, retains the model registration after server restarts.", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def register_model( endpoint: Optional[str], model_type: str, file: str, persist: bool, + api_key: Optional[str], ): endpoint = get_endpoint(endpoint) with open(file) as fd: model = fd.read() - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) client.register_model( model_type=model_type, model=model, @@ -408,15 +417,24 @@ def register_model( help="Type of model to unregister (default is 'LLM').", ) @click.option("--model-name", "-n", type=str, help="Name of the model to unregister.") [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def unregister_model( endpoint: Optional[str], model_type: str, model_name: str, + api_key: Optional[str], ): endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) client.unregister_model( model_type=model_type, model_name=model_name, @@ -437,15 +455,24 @@ def unregister_model( type=str, help="Filter by model type (default is 'LLM').", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def list_model_registrations( endpoint: Optional[str], model_type: str, + api_key: Optional[str], ): from tabulate import tabulate endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) registrations = client.list_model_registrations(model_type=model_type) @@ -638,6 +665,13 @@ def list_model_registrations( type=bool, help="Whether or not to allow for custom models defined on the Hub in their own modeling files.", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) @click.pass_context def model_launch( ctx, @@ -654,6 +688,7 @@ def model_launch( image_lora_load_kwargs: Optional[Tuple], image_lora_fuse_kwargs: Optional[Tuple], trust_remote_code: bool, + api_key: Optional[str], ): kwargs = {} for i in range(0, len(ctx.args), 2): @@ -686,8 +721,9 @@ def model_launch( if size_in_billions is None or "_" in size_in_billions else int(size_in_billions) ) - client = 
RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) model_uid = client.launch_model( model_name=model_name, @@ -718,12 +754,20 @@ def model_launch( type=str, help="Xinference endpoint.", ) -def model_list(endpoint: Optional[str]): [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) +def model_list(endpoint: Optional[str], api_key: Optional[str]): from tabulate import tabulate endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) llm_table = [] embedding_table = [] @@ -844,13 +888,22 @@ def model_list(endpoint: Optional[str]): required=True, help="The unique identifier (UID) of the model.", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def model_terminate( endpoint: Optional[str], model_uid: str, + api_key: Optional[str], ): endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) client.terminate_model(model_uid=model_uid) @@ -873,15 +926,24 @@ def model_terminate( type=bool, help="Whether to stream the generated text. Use 'True' for streaming (default is True).", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def model_generate( endpoint: Optional[str], model_uid: str, max_tokens: int, stream: bool, + api_key: Optional[str], ): endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) if stream: # TODO: when stream=True, RestfulClient cannot generate words one by one. # So use Client in temporary. The implementation needs to be changed to @@ -959,16 +1021,25 @@ async def generate_internal(): type=bool, help="Whether to stream the chat messages. Use 'True' for streaming (default is True).", ) [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) def model_chat( endpoint: Optional[str], model_uid: str, max_tokens: int, stream: bool, + api_key: Optional[str], ): # TODO: chat model roles may not be user and assistant. 
endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) chat_history: "List[ChatCompletionMessage]" = [] if stream: @@ -1048,10 +1119,18 @@ async def chat_internal(): @cli.command("vllm-models", help="Query and display models compatible with vLLM.") @click.option("--endpoint", "-e", type=str, help="Xinference endpoint.") -def vllm_models(endpoint: Optional[str]): [email protected]( + "--api-key", + "-ak", + default=None, + type=str, + help="Api-Key for access xinference api with authorization.", +) +def vllm_models(endpoint: Optional[str], api_key: Optional[str]): endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) - client._set_token(get_stored_token(endpoint, client)) + client = RESTfulClient(base_url=endpoint, api_key=api_key) + if api_key is None: + client._set_token(get_stored_token(endpoint, client)) vllm_models_dict = client.vllm_models() print("VLLM supported model families:") chat_models = vllm_models_dict["chat"]
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
Example record 2

instance_id: xorbitsai__inference-2079@5ae18b2
repo: xorbitsai/inference
language: Python
pull_number: 2079
title: Feat: Support internvl2 and internvl stream
created_at: 2024-08-13T09:57:28Z
problem_statement:
intervl2不支持流式请求 ### System Info / 系統信息 cuda 11.8 python3.9 xinference 0.14.0 ### Running Xinference with Docker? / 是否使用 Docker 运行 Xinfernece? - [X] docker / docker - [ ] pip install / 通过 pip install 安装 - [ ] installation from source / 从源码安装 ### Version info / 版本信息 0.14.0 ### The command used to start Xinference / 用以启动 xinference 的命令 docker run --name xinference-19997-0807 \ --shm-size="200g" \ -p 19997:9997 \ -e XINFERENCE_MODEL_SRC=modelscope \ --gpus all \ -v /home/llm/models:/llm/models \ -d xprobe/xinference:v0.14.0 xinference-local -H 0.0.0.0 docker ### Reproduction / 复现过程 webui流式请求 报错 内部日志如下 2024-08-07 08:43:45,385 xinference.api.restful_api 1 ERROR Chat completion stream got an error: [address=0.0.0.0:35579, pid=838] Chat with model internvl-chat does not support stream. Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py", line 1671, in stream_results iterator = await model.chat( File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 231, in send return self._process_result_message(result) File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 102, in _process_result_message raise message.as_instanceof_cause() File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 656, in send result = await self._run_coro(message.message_id, coro) File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 367, in _run_coro return await coro File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__ return await super().__on_receive__(message) # type: ignore File "xoscar/core.pyx", line 558, in __on_receive__ raise ex File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__ async with self._lock: File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__ with debug_async_timeout('actor_lock_timeout', File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__ result = await result File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 45, in wrapped ret = await func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 90, in wrapped_func ret = await fn(self, *args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 462, in _wrapper r = await func(self, *args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 523, in chat response = await self._call_wrapper_json( File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 393, in _call_wrapper_json return await self._call_wrapper("json", fn, *args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 114, in _async_wrapper return await fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 404, in _call_wrapper ret = await asyncio.to_thread(fn, *args, **kwargs) File "/usr/lib/python3.10/asyncio/threads.py", line 25, in to_thread return await loop.run_in_executor(None, func_call) File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/pytorch/intern_vl.py", line 307, in chat raise Exception( Exception: [address=0.0.0.0:35579, pid=838] Chat with model internvl-chat does not support stream. ### Expected behavior / 期待表现 期望能支持internvl2.0系列的流式请求
hints_text:
另,internvl2 的 usage 返回 tokens 都是 -1,希望能正确返回。 This issue is stale because it has been open for 7 days with no activity.
resolved_issues:
[ { "body": "### System Info / 系統信息\n\ncuda 11.8\r\npython3.9 \r\nxinference 0.14.0\n\n### Running Xinference with Docker? / 是否使用 Docker 运行 Xinfernece?\n\n- [X] docker / docker\n- [ ] pip install / 通过 pip install 安装\n- [ ] installation from source / 从源码安装\n\n### Version info / 版本信息\n\n0.14.0\n\n### The command used to start Xinference / 用以启动 xinference 的命令\n\ndocker run --name xinference-19997-0807 \\\r\n--shm-size=\"200g\" \\\r\n-p 19997:9997 \\\r\n-e XINFERENCE_MODEL_SRC=modelscope \\\r\n--gpus all \\\r\n-v /home/llm/models:/llm/models \\\r\n-d xprobe/xinference:v0.14.0 xinference-local -H 0.0.0.0\r\ndocker\n\n### Reproduction / 复现过程\n\nwebui流式请求 报错\r\n内部日志如下\r\n2024-08-07 08:43:45,385 xinference.api.restful_api 1 ERROR Chat completion stream got an error: [address=0.0.0.0:35579, pid=838] Chat with model internvl-chat does not support stream.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py\", line 1671, in stream_results\r\n iterator = await model.chat(\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py\", line 231, in send\r\n return self._process_result_message(result)\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py\", line 102, in _process_result_message\r\n raise message.as_instanceof_cause()\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py\", line 656, in send\r\n result = await self._run_coro(message.message_id, coro)\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py\", line 367, in _run_coro\r\n return await coro\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/api.py\", line 384, in __on_receive__\r\n return await super().__on_receive__(message) # type: ignore\r\n File \"xoscar/core.pyx\", line 558, in __on_receive__\r\n raise ex\r\n File \"xoscar/core.pyx\", line 520, in xoscar.core._BaseActor.__on_receive__\r\n async with self._lock:\r\n File \"xoscar/core.pyx\", line 521, in xoscar.core._BaseActor.__on_receive__\r\n with debug_async_timeout('actor_lock_timeout',\r\n File \"xoscar/core.pyx\", line 526, in xoscar.core._BaseActor.__on_receive__\r\n result = await result\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py\", line 45, in wrapped\r\n ret = await func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/model.py\", line 90, in wrapped_func\r\n ret = await fn(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/xoscar/api.py\", line 462, in _wrapper\r\n r = await func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/model.py\", line 523, in chat\r\n response = await self._call_wrapper_json(\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/model.py\", line 393, in _call_wrapper_json\r\n return await self._call_wrapper(\"json\", fn, *args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/model.py\", line 114, in _async_wrapper\r\n return await fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/xinference/core/model.py\", line 404, in _call_wrapper\r\n ret = await asyncio.to_thread(fn, *args, **kwargs)\r\n File \"/usr/lib/python3.10/asyncio/threads.py\", line 25, in to_thread\r\n return await loop.run_in_executor(None, func_call)\r\n File \"/usr/lib/python3.10/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File 
\"/usr/local/lib/python3.10/dist-packages/xinference/model/llm/pytorch/intern_vl.py\", line 307, in chat\r\n raise Exception(\r\nException: [address=0.0.0.0:35579, pid=838] Chat with model internvl-chat does not support stream.\n\n### Expected behavior / 期待表现\n\n期望能支持internvl2.0系列的流式请求", "number": 2037, "title": "intervl2不支持流式请求" } ]
base_commit: f5229a2354cd1592abdd8f3c757fc02be90a744c
commit_to_review:
{ "head_commit": "5ae18b28764237b353f12520b18eff114c26d36f", "head_commit_message": "fix bug", "patch_to_review": "diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py\nindex 749596fdbe..93342e86b5 100644\n--- a/xinference/model/llm/__init__.py\n+++ b/xinference/model/llm/__init__.py\n@@ -129,7 +129,7 @@ def _install():\n from .pytorch.vicuna import VicunaPytorchChatModel\n from .pytorch.yi_vl import YiVLChatModel\n from .sglang.core import SGLANGChatModel, SGLANGModel\n- from .vllm.core import VLLMChatModel, VLLMModel\n+ from .vllm.core import VLLMChatModel, VLLMModel, VLLMVisionModel\n \n try:\n from .pytorch.omnilmm import OmniLMMModel\n@@ -147,7 +147,7 @@ def _install():\n ]\n )\n SGLANG_CLASSES.extend([SGLANGModel, SGLANGChatModel])\n- VLLM_CLASSES.extend([VLLMModel, VLLMChatModel])\n+ VLLM_CLASSES.extend([VLLMModel, VLLMChatModel, VLLMVisionModel])\n MLX_CLASSES.extend([MLXModel, MLXChatModel])\n TRANSFORMERS_CLASSES.extend(\n [\ndiff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json\nindex a01f63569b..8ab94cfb19 100644\n--- a/xinference/model/llm/llm_family.json\n+++ b/xinference/model/llm/llm_family.json\n@@ -7873,32 +7873,194 @@\n \"model_format\": \"pytorch\",\n \"model_size_in_billions\": 2,\n \"quantizations\": [\n- \"none\"\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n ],\n \"model_id\": \"OpenGVLab/Mini-InternVL-Chat-2B-V1-5\",\n- \"model_revision\": \"ce3f67acff17281bacbf4b156f402a0580fb9605\"\n+ \"model_revision\": \"ecbbd21dcf38caa74d925967b997167b0c7b3f47\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 4,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/Mini-InternVL-Chat-4B-V1-5\",\n+ \"model_revision\": \"ce1559ddf9d87f5130aa5233b0e93b95e4e4161a\"\n },\n {\n \"model_format\": \"pytorch\",\n \"model_size_in_billions\": 26,\n \"quantizations\": [\n- \"none\"\n+ \"8-bit\",\n+ \"none\"\n ],\n \"model_id\": \"OpenGVLab/InternVL-Chat-V1-5\",\n- \"model_revision\": \"e822119e5806946ce128043023a73d715ecabf8d\"\n+ \"model_revision\": \"9db32d9127cac0c85961e169d75da57a18a847b1\"\n+ }\n+ ],\n+ \"prompt_style\": {\n+ \"style_name\": \"INTERNVL\",\n+ \"system_prompt\": \"You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).\",\n+ \"roles\": [\n+ \"<|im_start|>user\",\n+ \"<|im_start|>assistant\"\n+ ],\n+ \"intra_message_sep\": \"<|im_end|>\",\n+ \"stop_token_ids\": [\n+ 2,\n+ 92543,\n+ 92542\n+ ],\n+ \"stop\": [\n+ \"</s>\",\n+ \"<|im_end|>\",\n+ \"<|im_start|>\"\n+ ]\n+ }\n+ },\n+ {\n+ \"version\": 1,\n+ \"context_length\": 32768,\n+ \"model_name\": \"internvl2\",\n+ \"model_lang\": [\n+ \"en\",\n+ \"zh\"\n+ ],\n+ \"model_ability\": [\n+ \"chat\",\n+ \"vision\"\n+ ],\n+ \"model_description\": \"InternVL 2 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. 
\",\n+ \"model_specs\": [\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 1,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-1B\",\n+ \"model_revision\": \"a9fc14aea824b6ea1d44f8778cad6b35512c4ce1\"\n },\n {\n \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 2,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-2B\",\n+ \"model_revision\": \"422ad7c6335917bfb514958233955512338485a6\"\n+ },\n+ {\n+ \"model_format\": \"awq\",\n+ \"model_size_in_billions\": 2,\n+ \"quantizations\": [\n+ \"Int4\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-2B-AWQ\",\n+ \"model_revision\": \"701bc3fc098a8a3b686b3b4135cfb77202be89e0\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 4,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-4B\",\n+ \"model_revision\": \"b50544dafada6c41e80bfde2f57cc9b0140fc21c\"\n+ },\n+ {\n+ \"model_format\": \"awq\",\n+ \"model_size_in_billions\": 4,\n+ \"quantizations\": [\n+ \"Int4\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-8B-AWQ\",\n+ \"model_revision\": \"9f1a4756b7ae18eb26d8a22b618dfc283e8193b3\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 8,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-8B\",\n+ \"model_revision\": \"3bfd3664dea4f3da628785f5125d30f889701253\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 26,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-26B\",\n+ \"model_revision\": \"b9f3c7e6d575b0115e076a3ffc46fd20b7586899\"\n+ },\n+ {\n+ \"model_format\": \"awq\",\n \"model_size_in_billions\": 26,\n \"quantizations\": [\n- \"Int8\"\n+ \"Int4\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-26B-AWQ\",\n+ \"model_revision\": \"469e0019ffd251e22ff6501a5c2321964e86ef0d\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 40,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-40B\",\n+ \"model_revision\": \"725a12063bb855c966e30a0617d0ccd9e870d772\"\n+ },\n+ {\n+ \"model_format\": \"awq\",\n+ \"model_size_in_billions\": 40,\n+ \"quantizations\": [\n+ \"Int4\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-40B-AWQ\",\n+ \"model_revision\": \"d92e140f6dfe8ea9679924c6a31898f42c4e1846\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 76,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_id\": \"OpenGVLab/InternVL2-Llama3-76B\",\n+ \"model_revision\": \"cf7914905f78e9e3560ddbd6f5dfc39becac494f\"\n+ },\n+ {\n+ \"model_format\": \"awq\",\n+ \"model_size_in_billions\": 76,\n+ \"quantizations\": [\n+ \"Int4\"\n ],\n- \"model_id\": \"OpenGVLab/InternVL-Chat-V1-5-{quantization}\",\n- \"model_revision\": \"acaaed06937c603ab04f084216ecb0268160f538\"\n+ \"model_id\": \"OpenGVLab/InternVL2-Llama3-76B-AWQ\",\n+ \"model_revision\": \"1bc796bf80f2ebc7d6a14c15f55217a4600d50a4\"\n }\n ],\n \"prompt_style\": {\n- \"style_name\": \"INTERNLM2\",\n+ \"style_name\": \"INTERNVL\",\n \"system_prompt\": \"You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).\",\n \"roles\": [\n \"<|im_start|>user\",\n@@ -7906,10 +8068,14 @@\n ],\n \"intra_message_sep\": \"<|im_end|>\",\n 
\"stop_token_ids\": [\n+ 2,\n+ 92543,\n 92542\n ],\n \"stop\": [\n- \"<|im_end|>\"\n+ \"</s>\",\n+ \"<|im_end|>\",\n+ \"<|im_start|>\"\n ]\n }\n },\ndiff --git a/xinference/model/llm/llm_family_modelscope.json b/xinference/model/llm/llm_family_modelscope.json\nindex 37f415a1f1..251b1002ea 100644\n--- a/xinference/model/llm/llm_family_modelscope.json\n+++ b/xinference/model/llm/llm_family_modelscope.json\n@@ -4874,25 +4874,187 @@\n \"model_format\": \"pytorch\",\n \"model_size_in_billions\": 26,\n \"quantizations\": [\n- \"none\"\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n ],\n- \"model_hub\": \"modelscope\",\n- \"model_id\": \"AI-ModelScope/InternVL-Chat-V1-5\",\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL-Chat-V1-5\",\n+ \"model_revision\": \"master\"\n+ }\n+ ],\n+ \"prompt_style\": {\n+ \"style_name\": \"INTERNVL\",\n+ \"system_prompt\": \"You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).\",\n+ \"roles\": [\n+ \"<|im_start|>user\",\n+ \"<|im_start|>assistant\"\n+ ],\n+ \"intra_message_sep\": \"<|im_end|>\",\n+ \"stop_token_ids\": [\n+ 2,\n+ 92543,\n+ 92542\n+ ],\n+ \"stop\": [\n+ \"</s>\",\n+ \"<|im_end|>\",\n+ \"<|im_start|>\"\n+ ]\n+ }\n+ },\n+ {\n+ \"version\": 1,\n+ \"context_length\": 32768,\n+ \"model_name\": \"internvl2\",\n+ \"model_lang\": [\n+ \"en\",\n+ \"zh\"\n+ ],\n+ \"model_ability\": [\n+ \"chat\",\n+ \"vision\"\n+ ],\n+ \"model_description\": \"InternVL 2 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. \",\n+ \"model_specs\": [\n+\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 1,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-1B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 2,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-2B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 2,\n+ \"quantizations\": [\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-2B-AWQ\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 4,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-4B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 8,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-8B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 8,\n+ \"quantizations\": [\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-8B-AWQ\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 26,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-26B\",\n \"model_revision\": \"master\"\n },\n {\n \"model_format\": \"pytorch\",\n \"model_size_in_billions\": 26,\n 
\"quantizations\": [\n- \"Int8\"\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-26B-AWQ\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 40,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-40B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 40,\n+ \"quantizations\": [\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-40B-AWQ\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 76,\n+ \"quantizations\": [\n+ \"4-bit\",\n+ \"8-bit\",\n+ \"none\"\n+ ],\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-Llama3-76B\",\n+ \"model_revision\": \"master\"\n+ },\n+ {\n+ \"model_format\": \"pytorch\",\n+ \"model_size_in_billions\": 76,\n+ \"quantizations\": [\n+ \"none\"\n ],\n- \"model_hub\": \"modelscope\",\n- \"model_id\": \"AI-ModelScope/InternVL-Chat-V1-5-{quantization}\",\n+ \"model_hub\": \"modelscope\",\n+ \"model_id\": \"OpenGVLab/InternVL2-Llama3-76B-AWQ\",\n \"model_revision\": \"master\"\n }\n ],\n \"prompt_style\": {\n- \"style_name\": \"INTERNLM2\",\n+ \"style_name\": \"INTERNVL\",\n \"system_prompt\": \"You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).\",\n \"roles\": [\n \"<|im_start|>user\",\n@@ -4900,10 +5062,14 @@\n ],\n \"intra_message_sep\": \"<|im_end|>\",\n \"stop_token_ids\": [\n+ 2,\n+ 92543,\n 92542\n ],\n \"stop\": [\n- \"<|im_end|>\"\n+ \"</s>\",\n+ \"<|im_end|>\",\n+ \"<|im_start|>\"\n ]\n }\n },\ndiff --git a/xinference/model/llm/pytorch/core.py b/xinference/model/llm/pytorch/core.py\nindex ee5eb7ff70..09834155ab 100644\n--- a/xinference/model/llm/pytorch/core.py\n+++ b/xinference/model/llm/pytorch/core.py\n@@ -69,7 +69,7 @@\n \"yi-vl-chat\",\n \"deepseek-vl-chat\",\n \"internvl-chat\",\n- \"mini-internvl-chat\",\n+ \"internvl2\",\n \"cogvlm2\",\n \"MiniCPM-Llama3-V-2_5\",\n \"glm-4v\",\ndiff --git a/xinference/model/llm/pytorch/intern_vl.py b/xinference/model/llm/pytorch/intern_vl.py\nindex d9155f3b4b..6e8dfdc93d 100644\n--- a/xinference/model/llm/pytorch/intern_vl.py\n+++ b/xinference/model/llm/pytorch/intern_vl.py\n@@ -11,28 +11,25 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n-import base64\n import logging\n import time\n import uuid\n from concurrent.futures import ThreadPoolExecutor\n-from io import BytesIO\n-from typing import Dict, Iterator, List, Optional, Tuple, Union\n+from typing import Dict, Iterator, List, Optional, Union\n \n-import requests\n import torch\n-from PIL import Image\n \n-from ....model.utils import select_device\n from ....types import (\n ChatCompletion,\n ChatCompletionChunk,\n ChatCompletionMessage,\n Completion,\n CompletionChoice,\n+ CompletionChunk,\n CompletionUsage,\n )\n from ..llm_family import LLMFamilyV1, LLMSpecV1\n+from ..utils import _decode_image\n from .core import PytorchChatModel, PytorchGenerateConfig\n \n logger = logging.getLogger(__name__)\n@@ -41,6 +38,142 @@\n IMAGENET_STD = (0.229, 0.224, 0.225)\n \n \n+def _message_content_to_intern(content, image_cnt):\n+ if not isinstance(content, str):\n+ texts = []\n+ image_urls = []\n+ 
for c in content:\n+ c_type = c.get(\"type\")\n+ if c_type == \"text\":\n+ texts.append(c[\"text\"])\n+ elif c_type == \"image_url\":\n+ image_urls.append(c[\"image_url\"][\"url\"])\n+ image_futures = []\n+ with ThreadPoolExecutor() as executor:\n+ for image_url in image_urls:\n+ fut = executor.submit(_decode_image, image_url)\n+ image_futures.append(fut)\n+ images = [fut.result() for fut in image_futures]\n+ prefix = \"\"\n+ for i, _ in enumerate(images):\n+ prefix += f\"Image-{image_cnt + i + 1}: <image>\\n\\n\"\n+ text = prefix + \" \".join(texts)\n+ if len(images) == 0:\n+ return text, []\n+ else:\n+ return text, images\n+ return content, []\n+\n+\n+def _get_prompt_and_chat_history(\n+ prompt: Union[str, List[Dict]],\n+ chat_history: Optional[List[ChatCompletionMessage]] = None,\n+):\n+ # Convert openai history to intern vl history\n+ images = []\n+ history = []\n+ image_cnt = 0\n+ for h1, h2 in zip(*[iter(chat_history or [])] * 2):\n+ content1, img = _message_content_to_intern(h1[\"content\"], image_cnt)\n+ content2, _ = _message_content_to_intern(h2[\"content\"], image_cnt)\n+ history.append([content1, content2])\n+ images.extend(img)\n+ image_cnt += len(img)\n+\n+ question, img = _message_content_to_intern(prompt, image_cnt)\n+ images.extend(img)\n+ return question, history, images\n+\n+\n+def _build_transform(input_size=448):\n+ import torchvision.transforms as T\n+ from torchvision.transforms.functional import InterpolationMode\n+\n+ MEAN, STD = IMAGENET_MEAN, IMAGENET_STD\n+ transform = T.Compose(\n+ [\n+ T.Lambda(lambda img: img.convert(\"RGB\") if img.mode != \"RGB\" else img),\n+ T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),\n+ T.ToTensor(),\n+ T.Normalize(mean=MEAN, std=STD),\n+ ]\n+ )\n+ return transform\n+\n+\n+def _find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):\n+ best_ratio_diff = float(\"inf\")\n+ best_ratio = (1, 1)\n+ area = width * height\n+ for ratio in target_ratios:\n+ target_aspect_ratio = ratio[0] / ratio[1]\n+ ratio_diff = abs(aspect_ratio - target_aspect_ratio)\n+ if ratio_diff < best_ratio_diff:\n+ best_ratio_diff = ratio_diff\n+ best_ratio = ratio\n+ elif ratio_diff == best_ratio_diff:\n+ if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:\n+ best_ratio = ratio\n+ return best_ratio\n+\n+\n+def _dynamic_preprocess(\n+ image, min_num=1, max_num=12, image_size=448, use_thumbnail=False\n+):\n+ orig_width, orig_height = image.size\n+ aspect_ratio = orig_width / orig_height\n+\n+ # calculate the existing image aspect ratio\n+ target_ratios = set(\n+ (i, j)\n+ for n in range(min_num, max_num + 1)\n+ for i in range(1, n + 1)\n+ for j in range(1, n + 1)\n+ if i * j <= max_num and i * j >= min_num\n+ )\n+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])\n+\n+ # find the closest aspect ratio to the target\n+ target_aspect_ratio = _find_closest_aspect_ratio(\n+ aspect_ratio, target_ratios, orig_width, orig_height, image_size\n+ )\n+\n+ # calculate the target width and height\n+ target_width = image_size * target_aspect_ratio[0]\n+ target_height = image_size * target_aspect_ratio[1]\n+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]\n+\n+ # resize the image\n+ resized_img = image.resize((target_width, target_height))\n+ processed_images = []\n+ for i in range(blocks):\n+ box = (\n+ (i % (target_width // image_size)) * image_size,\n+ (i // (target_width // image_size)) * image_size,\n+ ((i % (target_width // image_size)) + 1) * image_size,\n+ ((i // (target_width // 
image_size)) + 1) * image_size,\n+ )\n+ # split the image\n+ split_img = resized_img.crop(box)\n+ processed_images.append(split_img)\n+ assert len(processed_images) == blocks\n+ if use_thumbnail and len(processed_images) != 1:\n+ thumbnail_img = image.resize((image_size, image_size))\n+ processed_images.append(thumbnail_img)\n+ return processed_images\n+\n+\n+def _load_image(image_file, input_size=448, max_num=12):\n+ image = image_file.convert(\"RGB\")\n+ transform = _build_transform(input_size=input_size)\n+ images = _dynamic_preprocess(\n+ image, image_size=input_size, use_thumbnail=True, max_num=max_num\n+ )\n+ pixel_values = [transform(image) for image in images]\n+ pixel_values = torch.stack(pixel_values)\n+ return pixel_values\n+\n+\n class InternVLChatModel(PytorchChatModel):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n@@ -52,249 +185,89 @@ def match(\n cls, model_family: \"LLMFamilyV1\", model_spec: \"LLMSpecV1\", quantization: str\n ) -> bool:\n family = model_family.model_family or model_family.model_name\n- if \"internvl\" in family.lower():\n- return True\n- return False\n+ if \"internvl\" not in family.lower():\n+ return False\n+ if \"pytorch\" not in model_spec.model_format:\n+ return False\n+ return True\n \n def _get_model_class(self):\n from transformers import AutoModel\n \n return AutoModel\n \n+ # Copy from InternVL page\n+ # reference: https://huggingface.co/OpenGVLab/InternVL2-8B\n+ def _split_model(self):\n+ import math\n+\n+ device_map = {}\n+ world_size = torch.cuda.device_count()\n+ # single gpu\n+ if world_size == 1:\n+ return None\n+ model_size = f\"{self.model_spec.model_size_in_billions}B\"\n+ num_layers = {\n+ \"1B\": 24,\n+ \"2B\": 24,\n+ \"4B\": 32,\n+ \"8B\": 32,\n+ \"26B\": 48,\n+ \"40B\": 60,\n+ \"76B\": 80,\n+ }[model_size]\n+ # Since the first GPU will be used for ViT, treat it as half a GPU.\n+ num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))\n+ num_layers_per_gpu = [num_layers_per_gpu] * world_size\n+ num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)\n+ layer_cnt = 0\n+ for i, num_layer in enumerate(num_layers_per_gpu):\n+ for j in range(num_layer):\n+ device_map[f\"language_model.model.layers.{layer_cnt}\"] = i\n+ layer_cnt += 1\n+ device_map[\"vision_model\"] = 0\n+ device_map[\"mlp1\"] = 0\n+ device_map[\"language_model.model.tok_embeddings\"] = 0\n+ device_map[\"language_model.model.embed_tokens\"] = 0\n+ device_map[\"language_model.output\"] = 0\n+ device_map[\"language_model.model.norm\"] = 0\n+ device_map[\"language_model.lm_head\"] = 0\n+ device_map[f\"language_model.model.layers.{num_layers - 1}\"] = 0\n+ return device_map\n+\n def load(self, **kwargs):\n from transformers import AutoModel, AutoTokenizer\n- from transformers.generation import GenerationConfig\n \n if self._check_tensorizer_integrity():\n self._model, self._tokenizer = self._load_tensorizer()\n return\n \n- device = self._pytorch_model_config.get(\"device\", \"auto\")\n- device = select_device(device)\n- # for multiple GPU, set back to auto to make multiple devices work\n- device = \"auto\" if device == \"cuda\" else device\n-\n- self._tokenizer = AutoTokenizer.from_pretrained(\n- self.model_path,\n- trust_remote_code=True,\n- )\n+ device = self._split_model()\n \n kwargs = {\n \"torch_dtype\": torch.bfloat16,\n \"low_cpu_mem_usage\": True,\n \"trust_remote_code\": True,\n- \"device_map\": device,\n }\n \n- if \"int8\" in self.quantization.lower():\n+ if device is not None:\n+ kwargs[\"device_map\"] = device\n+\n+ 
if \"8-bit\" in self.quantization.lower():\n kwargs[\"load_in_8bit\"] = True\n- elif 2 == self.model_spec.model_size_in_billions:\n- kwargs.pop(\"device_map\")\n+ elif \"4-bit\" in self.quantization.lower():\n+ kwargs[\"load_in_4bit\"] = True\n \n self._model = AutoModel.from_pretrained(self.model_path, **kwargs).eval()\n \n- if \"int8\" not in self.quantization.lower():\n+ if device is None and \"none\" in self.quantization.lower():\n self._model.cuda()\n \n- # Specify hyperparameters for generation\n- self._model.generation_config = GenerationConfig.from_pretrained(\n+ self._tokenizer = AutoTokenizer.from_pretrained(\n self.model_path,\n trust_remote_code=True,\n+ use_fast=False,\n )\n- self._save_tensorizer()\n-\n- def _message_content_to_intern(self, content):\n- def _load_image(_url):\n- if _url.startswith(\"data:\"):\n- logging.info(\"Parse url by base64 decoder.\")\n- # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images\n- # e.g. f\"data:image/jpeg;base64,{base64_image}\"\n- _type, data = _url.split(\";\")\n- _, ext = _type.split(\"/\")\n- data = data[len(\"base64,\") :]\n- data = base64.b64decode(data.encode(\"utf-8\"))\n- return Image.open(BytesIO(data)).convert(\"RGB\")\n- else:\n- try:\n- response = requests.get(_url)\n- except requests.exceptions.MissingSchema:\n- return Image.open(_url).convert(\"RGB\")\n- else:\n- return Image.open(BytesIO(response.content)).convert(\"RGB\")\n-\n- if not isinstance(content, str):\n- texts = []\n- image_urls = []\n- for c in content:\n- c_type = c.get(\"type\")\n- if c_type == \"text\":\n- texts.append(c[\"text\"])\n- elif c_type == \"image_url\":\n- image_urls.append(c[\"image_url\"][\"url\"])\n- image_futures = []\n- with ThreadPoolExecutor() as executor:\n- for image_url in image_urls:\n- fut = executor.submit(_load_image, image_url)\n- image_futures.append(fut)\n- images = [fut.result() for fut in image_futures]\n- text = \" \".join(texts)\n- if len(images) == 0:\n- return text, None\n- else:\n- return text, images\n- return content, None\n-\n- def _history_content_to_intern(\n- self,\n- chat_history: List[ChatCompletionMessage],\n- IMG_START_TOKEN=\"<img>\",\n- IMG_END_TOKEN=\"</img>\",\n- IMG_CONTEXT_TOKEN=\"<IMG_CONTEXT>\",\n- ):\n- def _image_to_piexl_values(images):\n- load_images = []\n- for image in images:\n- if image.startswith(\"data:\"):\n- logging.info(\"Parse url by base64 decoder.\")\n- # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images\n- # e.g. 
f\"data:image/jpeg;base64,{base64_image}\"\n- _type, data = image.split(\";\")\n- _, ext = _type.split(\"/\")\n- data = data[len(\"base64,\") :]\n- data = base64.b64decode(data.encode(\"utf-8\"))\n- img = Image.open(BytesIO(data)).convert(\"RGB\")\n- pixel_value = (\n- self._load_image(img, max_num=6).to(torch.bfloat16).cuda()\n- )\n- load_images.append(pixel_value)\n- else:\n- try:\n- response = requests.get(image)\n- except requests.exceptions.MissingSchema:\n- img = Image.open(image).convert(\"RGB\")\n- else:\n- img = Image.open(BytesIO(response.content)).convert(\"RGB\")\n- pixel_value = (\n- self._load_image(img, max_num=6).to(torch.bfloat16).cuda()\n- )\n- load_images.append(pixel_value)\n- return torch.cat(tuple(load_images), dim=0)\n-\n- history: List[Tuple] = []\n- pixel_values = None\n- for i in range(0, len(chat_history), 2):\n- tmp = []\n- images: List[str] = []\n- user = chat_history[i][\"content\"]\n- if isinstance(user, List):\n- for content in user:\n- c_type = content.get(\"type\")\n- if c_type == \"text\":\n- tmp.append(content[\"text\"])\n- elif c_type == \"image_url\" and not history:\n- images.append(content[\"image_url\"][\"url\"])\n- if not history:\n- pixel_values = _image_to_piexl_values(images)\n- image_bs = pixel_values.shape[0]\n- image_tokens = (\n- IMG_START_TOKEN\n- + IMG_CONTEXT_TOKEN * self._model.num_image_token * image_bs\n- + IMG_END_TOKEN\n- )\n- tmp[0] = image_tokens + \"\\n\" + tmp[0]\n- else:\n- tmp.append(user)\n- tmp.append(chat_history[i + 1][\"content\"])\n- history.append(tuple(tmp))\n- return history, pixel_values\n-\n- def _find_closest_aspect_ratio(\n- self, aspect_ratio, target_ratios, width, height, image_size\n- ):\n- best_ratio_diff = float(\"inf\")\n- best_ratio = (1, 1)\n- area = width * height\n- for ratio in target_ratios:\n- target_aspect_ratio = ratio[0] / ratio[1]\n- ratio_diff = abs(aspect_ratio - target_aspect_ratio)\n- if ratio_diff < best_ratio_diff:\n- best_ratio_diff = ratio_diff\n- best_ratio = ratio\n- elif ratio_diff == best_ratio_diff:\n- if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:\n- best_ratio = ratio\n- return best_ratio\n-\n- def _dynamic_preprocess(\n- self, image, min_num=1, max_num=6, image_size=448, use_thumbnail=False\n- ):\n- orig_width, orig_height = image.size\n- aspect_ratio = orig_width / orig_height\n-\n- # calculate the existing image aspect ratio\n- target_ratios = set(\n- (i, j)\n- for n in range(min_num, max_num + 1)\n- for i in range(1, n + 1)\n- for j in range(1, n + 1)\n- if i * j <= max_num and i * j >= min_num\n- )\n- target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])\n-\n- # find the closest aspect ratio to the target\n- target_aspect_ratio = self._find_closest_aspect_ratio(\n- aspect_ratio, target_ratios, orig_width, orig_height, image_size\n- )\n-\n- # calculate the target width and height\n- target_width = image_size * target_aspect_ratio[0]\n- target_height = image_size * target_aspect_ratio[1]\n- blocks = target_aspect_ratio[0] * target_aspect_ratio[1]\n-\n- # resize the image\n- resized_img = image.resize((target_width, target_height))\n- processed_images = []\n- for i in range(blocks):\n- box = (\n- (i % (target_width // image_size)) * image_size,\n- (i // (target_width // image_size)) * image_size,\n- ((i % (target_width // image_size)) + 1) * image_size,\n- ((i // (target_width // image_size)) + 1) * image_size,\n- )\n- # split the image\n- split_img = resized_img.crop(box)\n- processed_images.append(split_img)\n- assert len(processed_images) == blocks\n- 
if use_thumbnail and len(processed_images) != 1:\n- thumbnail_img = image.resize((image_size, image_size))\n- processed_images.append(thumbnail_img)\n- return processed_images\n-\n- def _build_transform(self, input_size):\n- import torchvision.transforms as T\n- from torchvision.transforms.functional import InterpolationMode\n-\n- MEAN, STD = IMAGENET_MEAN, IMAGENET_STD\n- transform = T.Compose(\n- [\n- T.Lambda(lambda img: img.convert(\"RGB\") if img.mode != \"RGB\" else img),\n- T.Resize(\n- (input_size, input_size), interpolation=InterpolationMode.BICUBIC\n- ),\n- T.ToTensor(),\n- T.Normalize(mean=MEAN, std=STD),\n- ]\n- )\n- return transform\n-\n- def _load_image(self, image_file, input_size=448, max_num=6):\n- transform = self._build_transform(input_size=input_size)\n- images = self._dynamic_preprocess(\n- image_file, image_size=input_size, use_thumbnail=True, max_num=max_num\n- )\n- pixel_values = [transform(image) for image in images]\n- pixel_values = torch.stack(pixel_values)\n- return pixel_values\n \n def chat(\n self,\n@@ -303,38 +276,82 @@ def chat(\n chat_history: Optional[List[ChatCompletionMessage]] = None,\n generate_config: Optional[PytorchGenerateConfig] = None,\n ) -> Union[ChatCompletion, Iterator[ChatCompletionChunk]]:\n- if generate_config and generate_config.get(\"stream\"):\n- raise Exception(\n- f\"Chat with model {self.model_family.model_name} does not support stream.\"\n- )\n- sanitized_config = {\n- \"num_beams\": 1,\n- \"max_new_tokens\": generate_config.get(\"max_tokens\", 512)\n+ generation_config = {\n+ \"max_new_tokens\": generate_config.get(\"max_tokens\", 1024)\n if generate_config\n- else 512,\n+ else 1024,\n \"do_sample\": False,\n }\n \n- content, image = self._message_content_to_intern(prompt)\n+ stream = (\n+ generate_config.get(\"stream\", False)\n+ if isinstance(generate_config, dict)\n+ else False\n+ )\n+ stream_options = (\n+ generate_config.get(\"stream_options\", None)\n+ if isinstance(generate_config, dict)\n+ else False\n+ )\n+ include_usage = (\n+ stream_options[\"include_usage\"]\n+ if isinstance(stream_options, dict)\n+ else False\n+ )\n+\n+ content, history, images = _get_prompt_and_chat_history(prompt, chat_history)\n \n- history = None\n- if chat_history:\n- history, pixel_values = self._history_content_to_intern(chat_history)\n+ num_patches_list = None\n+ if len(images) == 1:\n+ content = content.replace(\"Image-1: <image>\\n\\n\", \"<image>\\n\")\n+ history = [\n+ [item[0].replace(\"Image-1: <image>\\n\\n\", \"<image>\\n\"), item[1]]\n+ for item in history\n+ ]\n+ pixel_values = _load_image(images[-1], max_num=12).to(torch.bfloat16).cuda()\n+ elif len(images) > 1:\n+ pixel_values = [\n+ _load_image(img, max_num=12).to(torch.bfloat16).cuda() for img in images\n+ ]\n+ num_patches_list = [values.size(0) for values in pixel_values]\n+ pixel_values = torch.cat(pixel_values, dim=0)\n+ else:\n+ pixel_values = None\n+\n+ if stream:\n+ chunk = self._generate_stream(\n+ pixel_values,\n+ content,\n+ generation_config,\n+ num_patches_list,\n+ history,\n+ include_usage,\n+ )\n+ return self._to_chat_completion_chunks(chunk)\n else:\n- load_images = []\n- for img in image:\n- pixel_value = self._load_image(img, max_num=6).to(torch.bfloat16).cuda()\n- load_images.append(pixel_value)\n- pixel_values = torch.cat(tuple(load_images), dim=0)\n+ chunk = self._generate(\n+ pixel_values,\n+ content,\n+ generation_config,\n+ num_patches_list,\n+ history,\n+ )\n+ return self._to_chat_completion(chunk)\n \n- response, history = self._model.chat(\n+ def 
_generate(\n+ self, pixel_values, content, generation_config, num_patches_list, history\n+ ):\n+ response = self._model.chat(\n self._tokenizer,\n pixel_values,\n content,\n- sanitized_config,\n+ generation_config,\n+ num_patches_list=num_patches_list,\n history=history,\n- return_history=True,\n+ return_history=False,\n )\n+ prompt_tokens = self._get_input_tokens(content, history, num_patches_list)\n+ completion_tokens = self._get_output_tokens(response)\n chunk = Completion(\n id=str(uuid.uuid1()),\n object=\"text_completion\",\n@@ -346,7 +363,121 @@ def chat(\n )\n ],\n usage=CompletionUsage(\n- prompt_tokens=-1, completion_tokens=-1, total_tokens=-1\n+ prompt_tokens=prompt_tokens,\n+ completion_tokens=completion_tokens,\n+ total_tokens=prompt_tokens + completion_tokens,\n+ ),\n+ )\n+ return chunk\n+\n+ def _generate_stream(\n+ self,\n+ pixel_values,\n+ content,\n+ generation_config,\n+ num_patches_list,\n+ history,\n+ include_usage,\n+ ):\n+ from threading import Thread\n+\n+ from transformers import TextIteratorStreamer\n+\n+ # Initialize the streamer\n+ streamer = TextIteratorStreamer(\n+ self._tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10\n+ )\n+ # Define the generation configuration\n+ generation_config[\"streamer\"] = streamer\n+ # Start the model chat in a separate thread\n+ thread = Thread(\n+ target=self._model.chat,\n+ kwargs=dict(\n+ tokenizer=self._tokenizer,\n+ pixel_values=pixel_values,\n+ question=content,\n+ num_patches_list=num_patches_list,\n+ history=history,\n+ return_history=False,\n+ generation_config=generation_config,\n ),\n )\n- return self._to_chat_completion(chunk)\n+ thread.start()\n+\n+ completion_id = str(uuid.uuid1())\n+ prompt_tokens, completion_tokens, total_tokens = 0, 0, 0\n+ prompt_tokens = self._get_input_tokens(content, history, num_patches_list)\n+ # Loop through the streamer to get the new text as it is generated\n+ for i, new_text in enumerate(streamer):\n+ if new_text == self._model.conv_template.sep:\n+ break\n+ completion_choice = CompletionChoice(\n+ text=new_text, index=0, logprobs=None, finish_reason=None\n+ )\n+ chunk = CompletionChunk(\n+ id=completion_id,\n+ object=\"text_completion\",\n+ created=int(time.time()),\n+ model=self.model_uid,\n+ choices=[completion_choice],\n+ )\n+ completion_tokens = i\n+ total_tokens = prompt_tokens + completion_tokens\n+ completion_usage = CompletionUsage(\n+ prompt_tokens=prompt_tokens,\n+ completion_tokens=completion_tokens,\n+ total_tokens=total_tokens,\n+ )\n+ chunk[\"usage\"] = completion_usage\n+ yield chunk\n+ if include_usage:\n+ chunk = CompletionChunk(\n+ id=completion_id,\n+ object=\"text_completion\",\n+ created=int(time.time()),\n+ model=self.model_uid,\n+ choices=[],\n+ )\n+ chunk[\"usage\"] = CompletionUsage(\n+ prompt_tokens=prompt_tokens,\n+ completion_tokens=completion_tokens,\n+ total_tokens=total_tokens,\n+ )\n+ yield chunk\n+\n+ def _get_input_tokens(\n+ self,\n+ question,\n+ history,\n+ num_patches_list,\n+ IMG_START_TOKEN=\"<img>\",\n+ IMG_END_TOKEN=\"</img>\",\n+ IMG_CONTEXT_TOKEN=\"<IMG_CONTEXT>\",\n+ ):\n+ from ....thirdparty.internvl.conversation import get_conv_template\n+\n+ template = get_conv_template(self._model.template)\n+ template.system_message = self._model.system_message\n+\n+ history = [] if history is None else history\n+ for old_question, old_answer in history:\n+ template.append_message(template.roles[0], old_question)\n+ template.append_message(template.roles[1], old_answer)\n+ template.append_message(template.roles[0], question)\n+ 
template.append_message(template.roles[1], None)\n+ query = template.get_prompt()\n+\n+ for num_patches in num_patches_list or []:\n+ image_tokens = (\n+ IMG_START_TOKEN\n+ + IMG_CONTEXT_TOKEN * self._model.num_image_token * num_patches\n+ + IMG_END_TOKEN\n+ )\n+ query = query.replace(\"<image>\", image_tokens, 1)\n+\n+ model_inputs = self._tokenizer.encode(query, return_tensors=\"pt\")\n+ return len(model_inputs[0])\n+\n+ def _get_output_tokens(self, response):\n+ output_ids = self._tokenizer.encode(response, return_tensors=\"pt\")\n+ return len(output_ids[0])\ndiff --git a/xinference/model/llm/pytorch/utils.py b/xinference/model/llm/pytorch/utils.py\nindex 3e29472880..f2b82923f0 100644\n--- a/xinference/model/llm/pytorch/utils.py\n+++ b/xinference/model/llm/pytorch/utils.py\n@@ -42,7 +42,6 @@\n if TYPE_CHECKING:\n from ...llm.pytorch.core import PytorchModel\n \n-\n logger = logging.getLogger(__name__)\n \n \ndiff --git a/xinference/model/llm/utils.py b/xinference/model/llm/utils.py\nindex aadcbf9471..8b4f9e3f99 100644\n--- a/xinference/model/llm/utils.py\n+++ b/xinference/model/llm/utils.py\n@@ -11,14 +11,19 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+import base64\n import functools\n import json\n import logging\n import os\n import time\n import uuid\n+from io import BytesIO\n from typing import AsyncGenerator, Dict, Iterator, List, Optional, Tuple, cast\n \n+import requests\n+from PIL import Image\n+\n from ...types import (\n SPECIAL_TOOL_PROMPT,\n ChatCompletion,\n@@ -60,7 +65,7 @@ def get_prompt(\n chat_history: List[ChatCompletionMessage],\n prompt_style: PromptStyleV1,\n tools: Optional[List[Dict]] = None,\n- ) -> str:\n+ ):\n \"\"\"\n Inspired by FastChat. Format chat history into a prompt according to the prompty style of\n different models.\n@@ -504,6 +509,37 @@ def get_role(role_name: str):\n else:\n ret += role\n return ret\n+ elif prompt_style.style_name == \"INTERNVL\":\n+ ret = [] # type: ignore\n+ images = [] # type: ignore\n+ for message in chat_history:\n+ role = get_role(message[\"role\"])\n+ content = message[\"content\"]\n+ if isinstance(content, str):\n+ ret.append(message) # type: ignore\n+ elif isinstance(content, list):\n+ text = \"\"\n+ image_urls = []\n+ for c in content:\n+ c_type = c.get(\"type\")\n+ if c_type == \"text\":\n+ text = c[\"text\"]\n+ elif c_type == \"image_url\":\n+ image_urls.append(c[\"image_url\"][\"url\"])\n+ image_futures = []\n+ from concurrent.futures import ThreadPoolExecutor\n+\n+ with ThreadPoolExecutor() as executor:\n+ for image_url in image_urls:\n+ fut = executor.submit(_decode_image, image_url)\n+ image_futures.append(fut)\n+ images = [fut.result() for fut in image_futures]\n+ if len(image_futures) == 0:\n+ msg = {\"role\": role, \"content\": text}\n+ else:\n+ msg = {\"role\": role, \"content\": f\"<image>\\n{text}\"}\n+ ret.append(msg)\n+ return (ret, images)\n else:\n raise ValueError(f\"Invalid prompt style: {prompt_style.style_name}\")\n \n@@ -885,3 +921,22 @@ def get_model_version(\n llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str\n ) -> str:\n return f\"{llm_family.model_name}--{llm_spec.model_size_in_billions}B--{llm_spec.model_format}--{quantization}\"\n+\n+\n+def _decode_image(_url):\n+ if _url.startswith(\"data:\"):\n+ logging.info(\"Parse url by base64 decoder.\")\n+ # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images\n+ # e.g. 
f\"data:image/jpeg;base64,{base64_image}\"\n+ _type, data = _url.split(\";\")\n+ _, ext = _type.split(\"/\")\n+ data = data[len(\"base64,\") :]\n+ data = base64.b64decode(data.encode(\"utf-8\"))\n+ return Image.open(BytesIO(data)).convert(\"RGB\")\n+ else:\n+ try:\n+ response = requests.get(_url)\n+ except requests.exceptions.MissingSchema:\n+ return Image.open(_url).convert(\"RGB\")\n+ else:\n+ return Image.open(BytesIO(response.content)).convert(\"RGB\")\ndiff --git a/xinference/model/llm/vllm/core.py b/xinference/model/llm/vllm/core.py\nindex eb633db0b7..6b13963d3c 100644\n--- a/xinference/model/llm/vllm/core.py\n+++ b/xinference/model/llm/vllm/core.py\n@@ -19,6 +19,7 @@\n import uuid\n from typing import (\n TYPE_CHECKING,\n+ Any,\n AsyncGenerator,\n Dict,\n Iterable,\n@@ -28,6 +29,8 @@\n Union,\n )\n \n+from transformers import AutoTokenizer\n+\n from ....types import (\n ChatCompletion,\n ChatCompletionChunk,\n@@ -86,6 +89,9 @@ class VLLMGenerateConfig(TypedDict, total=False):\n except ImportError:\n VLLM_INSTALLED = False\n \n+VLLM_SUPPORTED_VISION_MODEL_LIST: List[str] = [\n+ \"internvl2\",\n+]\n VLLM_SUPPORTED_MODELS = [\n \"llama-2\",\n \"llama-3\",\n@@ -158,6 +164,7 @@ class VLLMGenerateConfig(TypedDict, total=False):\n if VLLM_INSTALLED and vllm.__version__ > \"0.5.3\":\n VLLM_SUPPORTED_MODELS.append(\"llama-3.1\")\n VLLM_SUPPORTED_CHAT_MODELS.append(\"llama-3.1-instruct\")\n+ VLLM_SUPPORTED_CHAT_MODELS.append(\"internvl2\")\n \n \n class VLLMModel(LLM):\n@@ -383,7 +390,7 @@ def _convert_request_output_to_completion(\n \n async def async_generate(\n self,\n- prompt: str,\n+ prompt: Union[str, Dict[str, Any]],\n generate_config: Optional[Dict] = None,\n tools: object = False,\n ) -> Union[Completion, AsyncGenerator[CompletionChunk, None]]:\n@@ -520,6 +527,11 @@ class VLLMChatModel(VLLMModel, ChatModelMixin):\n def match(\n cls, llm_family: \"LLMFamilyV1\", llm_spec: \"LLMSpecV1\", quantization: str\n ) -> bool:\n+ if (\n+ llm_family.model_name in VLLM_SUPPORTED_VISION_MODEL_LIST\n+ or llm_family.model_family in VLLM_SUPPORTED_VISION_MODEL_LIST\n+ ):\n+ return False\n if llm_spec.model_format not in [\"pytorch\", \"gptq\", \"awq\"]:\n return False\n if llm_spec.model_format == \"pytorch\":\n@@ -606,3 +618,133 @@ async def async_chat(\n self.model_family, self.model_uid, c, tools\n )\n return self._to_chat_completion(c)\n+\n+\n+class VLLMVisionModel(VLLMModel, ChatModelMixin):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ model_config: Optional[VLLMModelConfig],\n+ peft_model: Optional[List[LoRA]] = None,\n+ ):\n+ super().__init__(\n+ model_uid,\n+ model_family,\n+ model_spec,\n+ quantization,\n+ model_path,\n+ model_config,\n+ peft_model,\n+ )\n+ self._tokenizer = None\n+\n+ def load(self):\n+ try:\n+ import vllm\n+ from vllm.engine.arg_utils import AsyncEngineArgs\n+ from vllm.engine.async_llm_engine import AsyncLLMEngine\n+ except ImportError:\n+ error_message = \"Failed to import module 'vllm'\"\n+ installation_guide = [\n+ \"Please make sure 'vllm' is installed. 
\",\n+ \"You can install it by `pip install vllm`\\n\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ if vllm.__version__ >= \"0.3.1\":\n+ # from vllm v0.3.1, it uses cupy as NCCL backend\n+ # in which cupy will fork a process\n+ # only for xoscar >= 0.3.0, new process is allowed in subpool\n+ # besides, xinference set start method as forkserver for unix\n+ # we need to set it to fork to make cupy NCCL work\n+ multiprocessing.set_start_method(\"fork\", force=True)\n+\n+ self._model_config = self._sanitize_model_config(self._model_config)\n+\n+ logger.info(\n+ f\"Loading {self.model_uid} with following model config: {self._model_config}\"\n+ )\n+\n+ engine_args = AsyncEngineArgs(\n+ model=self.model_path,\n+ **self._model_config,\n+ )\n+ self._engine = AsyncLLMEngine.from_engine_args(engine_args)\n+ self._tokenizer = AutoTokenizer.from_pretrained(\n+ self.model_path, trust_remote_code=True\n+ )\n+\n+ @classmethod\n+ def match(\n+ cls, llm_family: \"LLMFamilyV1\", llm_spec: \"LLMSpecV1\", quantization: str\n+ ) -> bool:\n+ if llm_spec.model_format != \"pytorch\":\n+ return False\n+ if llm_spec.model_format == \"pytorch\":\n+ if quantization != \"none\" and not (quantization is None):\n+ return False\n+ if isinstance(llm_family, CustomLLMFamilyV1):\n+ if llm_family.model_family not in VLLM_SUPPORTED_VISION_MODEL_LIST:\n+ return False\n+ else:\n+ if llm_family.model_name not in VLLM_SUPPORTED_VISION_MODEL_LIST:\n+ return False\n+ if \"chat\" not in llm_family.model_ability:\n+ return False\n+ return VLLM_INSTALLED\n+\n+ def _sanitize_chat_config(\n+ self,\n+ generate_config: Optional[Dict] = None,\n+ ) -> Dict:\n+ if not generate_config:\n+ generate_config = {}\n+ if self.model_family.prompt_style:\n+ if self.model_family.prompt_style.stop_token_ids:\n+ generate_config.setdefault(\n+ \"stop_token_ids\",\n+ self.model_family.prompt_style.stop_token_ids.copy(),\n+ )\n+ return generate_config\n+\n+ async def async_chat(\n+ self,\n+ prompt: str,\n+ system_prompt: Optional[str] = None,\n+ chat_history: Optional[List[ChatCompletionMessage]] = None,\n+ generate_config: Optional[Dict] = None,\n+ ) -> Union[ChatCompletion, AsyncGenerator[ChatCompletionChunk, None]]:\n+ # only support single image, waiting vllm support multi images\n+ assert self.model_family.prompt_style is not None\n+ prompt_style = self.model_family.prompt_style.copy()\n+ chat_history = chat_history or []\n+ messages, images = self.get_prompt(prompt, chat_history, prompt_style)\n+ prompt = self._tokenizer.apply_chat_template( # type: ignore\n+ messages, tokenize=False, add_generation_prompt=True\n+ )\n+ if len(images) == 0:\n+ inputs = {\n+ \"prompt\": prompt,\n+ }\n+ else:\n+ inputs = {\n+ \"prompt\": prompt,\n+ \"multi_modal_data\": {\"image\": images[-1]}, # type: ignore\n+ }\n+ generate_config = self._sanitize_chat_config(generate_config)\n+\n+ stream = generate_config.get(\"stream\", None)\n+\n+ if stream:\n+ agen = await self.async_generate(inputs, generate_config)\n+ assert isinstance(agen, AsyncGenerator)\n+ return self._async_to_chat_completion_chunks(agen)\n+ else:\n+ c = await self.async_generate(inputs, generate_config)\n+ assert not isinstance(c, AsyncGenerator)\n+ return self._to_chat_completion(c)\ndiff --git a/xinference/thirdparty/internvl/__init__.py b/xinference/thirdparty/internvl/__init__.py\nnew file mode 100644\nindex 0000000000..e69de29bb2\ndiff --git a/xinference/thirdparty/internvl/conversation.py b/xinference/thirdparty/internvl/conversation.py\nnew file mode 
100644\nindex 0000000000..2fe37ad08c\n--- /dev/null\n+++ b/xinference/thirdparty/internvl/conversation.py\n@@ -0,0 +1,393 @@\n+\"\"\"\n+Conversation prompt templates.\n+\n+We kindly request that you import fastchat instead of copying this file if you wish to use it.\n+If you have changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates.\n+\"\"\"\n+\n+import dataclasses\n+from enum import IntEnum, auto\n+from typing import Any, Dict, List, Tuple, Union\n+\n+\n+class SeparatorStyle(IntEnum):\n+ \"\"\"Separator styles.\"\"\"\n+\n+ ADD_COLON_SINGLE = auto()\n+ ADD_COLON_TWO = auto()\n+ ADD_COLON_SPACE_SINGLE = auto()\n+ NO_COLON_SINGLE = auto()\n+ NO_COLON_TWO = auto()\n+ ADD_NEW_LINE_SINGLE = auto()\n+ LLAMA2 = auto()\n+ CHATGLM = auto()\n+ CHATML = auto()\n+ CHATINTERN = auto()\n+ DOLLY = auto()\n+ RWKV = auto()\n+ PHOENIX = auto()\n+ ROBIN = auto()\n+ FALCON_CHAT = auto()\n+ CHATGLM3 = auto()\n+ INTERNVL_ZH = auto()\n+ MPT = auto()\n+\n+\[email protected]\n+class Conversation:\n+ \"\"\"A class that manages prompt templates and keeps all conversation history.\"\"\"\n+\n+ # The name of this template\n+ name: str\n+ # The template of the system prompt\n+ system_template: str = '{system_message}'\n+ # The system message\n+ system_message: str = ''\n+ # The names of two roles\n+ roles: Tuple[str] = ('USER', 'ASSISTANT')\n+ # All messages. Each item is (role, message).\n+ messages: List[List[str]] = ()\n+ # The number of few shot examples\n+ offset: int = 0\n+ # The separator style and configurations\n+ sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE\n+ sep: str = '\\n'\n+ sep2: str = None\n+ # Stop criteria (the default one is EOS token)\n+ stop_str: Union[str, List[str]] = None\n+ # Stops generation if meeting any token in this list\n+ stop_token_ids: List[int] = None\n+\n+ def get_prompt(self) -> str:\n+ \"\"\"Get the prompt for generation.\"\"\"\n+ system_prompt = self.system_template.format(system_message=self.system_message)\n+ if self.sep_style == SeparatorStyle.ADD_COLON_SINGLE:\n+ ret = system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + ': ' + message + self.sep\n+ else:\n+ ret += role + ':'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.ADD_COLON_TWO:\n+ seps = [self.sep, self.sep2]\n+ ret = system_prompt + seps[0]\n+ for i, (role, message) in enumerate(self.messages):\n+ if message:\n+ ret += role + ': ' + message + seps[i % 2]\n+ else:\n+ ret += role + ':'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.ADD_COLON_SPACE_SINGLE:\n+ ret = system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + ': ' + message + self.sep\n+ else:\n+ ret += role + ': ' # must be end with a space\n+ return ret\n+ elif self.sep_style == SeparatorStyle.ADD_NEW_LINE_SINGLE:\n+ ret = '' if system_prompt == '' else system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + '\\n' + message + self.sep\n+ else:\n+ ret += role + '\\n'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.NO_COLON_SINGLE:\n+ ret = system_prompt\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + message + self.sep\n+ else:\n+ ret += role\n+ return ret\n+ elif self.sep_style == SeparatorStyle.NO_COLON_TWO:\n+ seps = [self.sep, self.sep2]\n+ ret = system_prompt\n+ for i, (role, message) in enumerate(self.messages):\n+ if message:\n+ ret += role + message + seps[i % 2]\n+ else:\n+ ret += role\n+ 
return ret\n+ elif self.sep_style == SeparatorStyle.RWKV:\n+ ret = system_prompt\n+ for i, (role, message) in enumerate(self.messages):\n+ if message:\n+ ret += (\n+ role\n+ + ': '\n+ + message.replace('\\r\\n', '\\n').replace('\\n\\n', '\\n')\n+ )\n+ ret += '\\n\\n'\n+ else:\n+ ret += role + ':'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.LLAMA2:\n+ seps = [self.sep, self.sep2]\n+ if self.system_message:\n+ ret = system_prompt\n+ else:\n+ ret = '[INST] '\n+ for i, (role, message) in enumerate(self.messages):\n+ tag = self.roles[i % 2]\n+ if message:\n+ if i == 0:\n+ ret += message + ' '\n+ else:\n+ ret += tag + ' ' + message + seps[i % 2]\n+ else:\n+ ret += tag\n+ return ret\n+ elif self.sep_style == SeparatorStyle.CHATGLM:\n+ # source: https://huggingface.co/THUDM/chatglm-6b/blob/1d240ba371910e9282298d4592532d7f0f3e9f3e/modeling_chatglm.py#L1302-L1308\n+ # source2: https://huggingface.co/THUDM/chatglm2-6b/blob/e186c891cf64310ac66ef10a87e6635fa6c2a579/modeling_chatglm.py#L926\n+ round_add_n = 1 if self.name == 'chatglm2' else 0\n+ if system_prompt:\n+ ret = system_prompt + self.sep\n+ else:\n+ ret = ''\n+\n+ for i, (role, message) in enumerate(self.messages):\n+ if i % 2 == 0:\n+ ret += f'[Round {i//2 + round_add_n}]{self.sep}'\n+\n+ if message:\n+ ret += f'{role}:{message}{self.sep}'\n+ else:\n+ ret += f'{role}:'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.CHATML:\n+ ret = '' if system_prompt == '' else system_prompt + self.sep + '\\n'\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + '\\n' + message + self.sep + '\\n'\n+ else:\n+ ret += role + '\\n'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.CHATGLM3:\n+ ret = ''\n+ if self.system_message:\n+ ret += system_prompt\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + '\\n' + ' ' + message\n+ else:\n+ ret += role\n+ return ret\n+ elif self.sep_style == SeparatorStyle.CHATINTERN:\n+ # source: https://huggingface.co/internlm/internlm-chat-7b-8k/blob/bd546fa984b4b0b86958f56bf37f94aa75ab8831/modeling_internlm.py#L771\n+ seps = [self.sep, self.sep2]\n+ ret = system_prompt\n+ for i, (role, message) in enumerate(self.messages):\n+ # if i % 2 == 0:\n+ # ret += \"<s>\"\n+ if message:\n+ ret += role + ':' + message + seps[i % 2] + '\\n'\n+ else:\n+ ret += role + ':'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.DOLLY:\n+ seps = [self.sep, self.sep2]\n+ ret = system_prompt\n+ for i, (role, message) in enumerate(self.messages):\n+ if message:\n+ ret += role + ':\\n' + message + seps[i % 2]\n+ if i % 2 == 1:\n+ ret += '\\n\\n'\n+ else:\n+ ret += role + ':\\n'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.PHOENIX:\n+ ret = system_prompt\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + ': ' + '<s>' + message + '</s>'\n+ else:\n+ ret += role + ': ' + '<s>'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.ROBIN:\n+ ret = system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + ':\\n' + message + self.sep\n+ else:\n+ ret += role + ':\\n'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.FALCON_CHAT:\n+ ret = ''\n+ if self.system_message:\n+ ret += system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ ret += role + ': ' + message + self.sep\n+ else:\n+ ret += role + ':'\n+\n+ return ret\n+ elif self.sep_style == SeparatorStyle.INTERNVL_ZH:\n+ seps = [self.sep, self.sep2]\n+ ret = self.system_message + seps[0]\n+ for i, (role, message) in enumerate(self.messages):\n+ 
if message:\n+ ret += role + ': ' + message + seps[i % 2]\n+ else:\n+ ret += role + ':'\n+ return ret\n+ elif self.sep_style == SeparatorStyle.MPT:\n+ ret = system_prompt + self.sep\n+ for role, message in self.messages:\n+ if message:\n+ if type(message) is tuple:\n+ message, _, _ = message\n+ ret += role + message + self.sep\n+ else:\n+ ret += role\n+ return ret\n+ else:\n+ raise ValueError(f'Invalid style: {self.sep_style}')\n+\n+ def set_system_message(self, system_message: str):\n+ \"\"\"Set the system message.\"\"\"\n+ self.system_message = system_message\n+\n+ def append_message(self, role: str, message: str):\n+ \"\"\"Append a new message.\"\"\"\n+ self.messages.append([role, message])\n+\n+ def update_last_message(self, message: str):\n+ \"\"\"Update the last output.\n+\n+ The last message is typically set to be None when constructing the prompt,\n+ so we need to update it in-place after getting the response from a model.\n+ \"\"\"\n+ self.messages[-1][1] = message\n+\n+ def to_gradio_chatbot(self):\n+ \"\"\"Convert the conversation to gradio chatbot format.\"\"\"\n+ ret = []\n+ for i, (role, msg) in enumerate(self.messages[self.offset :]):\n+ if i % 2 == 0:\n+ ret.append([msg, None])\n+ else:\n+ ret[-1][-1] = msg\n+ return ret\n+\n+ def to_openai_api_messages(self):\n+ \"\"\"Convert the conversation to OpenAI chat completion format.\"\"\"\n+ ret = [{'role': 'system', 'content': self.system_message}]\n+\n+ for i, (_, msg) in enumerate(self.messages[self.offset :]):\n+ if i % 2 == 0:\n+ ret.append({'role': 'user', 'content': msg})\n+ else:\n+ if msg is not None:\n+ ret.append({'role': 'assistant', 'content': msg})\n+ return ret\n+\n+ def copy(self):\n+ return Conversation(\n+ name=self.name,\n+ system_template=self.system_template,\n+ system_message=self.system_message,\n+ roles=self.roles,\n+ messages=[[x, y] for x, y in self.messages],\n+ offset=self.offset,\n+ sep_style=self.sep_style,\n+ sep=self.sep,\n+ sep2=self.sep2,\n+ stop_str=self.stop_str,\n+ stop_token_ids=self.stop_token_ids,\n+ )\n+\n+ def dict(self):\n+ return {\n+ 'template_name': self.name,\n+ 'system_message': self.system_message,\n+ 'roles': self.roles,\n+ 'messages': self.messages,\n+ 'offset': self.offset,\n+ }\n+\n+\n+# A global registry for all conversation templates\n+conv_templates: Dict[str, Conversation] = {}\n+\n+\n+def register_conv_template(template: Conversation, override: bool = False):\n+ \"\"\"Register a new conversation template.\"\"\"\n+ if not override:\n+ assert (\n+ template.name not in conv_templates\n+ ), f'{template.name} has been registered.'\n+\n+ conv_templates[template.name] = template\n+\n+\n+def get_conv_template(name: str) -> Conversation:\n+ \"\"\"Get a conversation template.\"\"\"\n+ return conv_templates[name].copy()\n+\n+\n+# Both Hermes-2 and internlm2-chat are chatml-format conversation templates. 
The difference\n+# is that during training, the preprocessing function for the Hermes-2 template doesn't add\n+# <s> at the beginning of the tokenized sequence, while the internlm2-chat template does.\n+# Therefore, they are completely equivalent during inference.\n+register_conv_template(\n+ Conversation(\n+ name='Hermes-2',\n+ system_template='<|im_start|>system\\n{system_message}',\n+ # note: The new system prompt was not used here to avoid changes in benchmark performance.\n+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',\n+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',\n+ roles=('<|im_start|>user\\n', '<|im_start|>assistant\\n'),\n+ sep_style=SeparatorStyle.MPT,\n+ sep='<|im_end|>',\n+ stop_token_ids=[\n+ 2,\n+ 6,\n+ 7,\n+ 8,\n+ ],\n+ stop_str='<|endoftext|>',\n+ )\n+)\n+\n+\n+register_conv_template(\n+ Conversation(\n+ name='internlm2-chat',\n+ system_template='<|im_start|>system\\n{system_message}',\n+ # note: The new system prompt was not used here to avoid changes in benchmark performance.\n+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',\n+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',\n+ roles=('<|im_start|>user\\n', '<|im_start|>assistant\\n'),\n+ sep_style=SeparatorStyle.MPT,\n+ sep='<|im_end|>',\n+ stop_token_ids=[\n+ 2,\n+ 92543,\n+ 92542\n+ ]\n+ )\n+)\n+\n+\n+register_conv_template(\n+ Conversation(\n+ name='phi3-chat',\n+ system_template='<|system|>\\n{system_message}',\n+ # note: The new system prompt was not used here to avoid changes in benchmark performance.\n+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',\n+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',\n+ roles=('<|user|>\\n', '<|assistant|>\\n'),\n+ sep_style=SeparatorStyle.MPT,\n+ sep='<|end|>',\n+ stop_token_ids=[\n+ 2,\n+ 32000,\n+ 32007\n+ ]\n+ )\n+)\n" }
[ { "diff_hunk": "@@ -28,6 +29,8 @@\n Union,\n )\n \n+from transformers import AutoTokenizer", "line": null, "original_line": 32, "original_start_line": null, "path": "xinference/model/llm/vllm/core.py", "start_line": null, "text": "@user1:\n`transformers` is an optional dependency, so we may need to put the import into function." }, { "diff_hunk": "@@ -606,3 +618,133 @@ async def async_chat(\n self.model_family, self.model_uid, c, tools\n )\n return self._to_chat_completion(c)\n+\n+\n+class VLLMVisionModel(VLLMModel, ChatModelMixin):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ model_config: Optional[VLLMModelConfig],\n+ peft_model: Optional[List[LoRA]] = None,\n+ ):\n+ super().__init__(\n+ model_uid,\n+ model_family,\n+ model_spec,\n+ quantization,\n+ model_path,\n+ model_config,\n+ peft_model,\n+ )\n+ self._tokenizer = None\n+\n+ def load(self):\n+ try:\n+ import vllm\n+ from vllm.engine.arg_utils import AsyncEngineArgs\n+ from vllm.engine.async_llm_engine import AsyncLLMEngine\n+ except ImportError:\n+ error_message = \"Failed to import module 'vllm'\"\n+ installation_guide = [\n+ \"Please make sure 'vllm' is installed. \",\n+ \"You can install it by `pip install vllm`\\n\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ if vllm.__version__ >= \"0.3.1\":\n+ # from vllm v0.3.1, it uses cupy as NCCL backend\n+ # in which cupy will fork a process\n+ # only for xoscar >= 0.3.0, new process is allowed in subpool\n+ # besides, xinference set start method as forkserver for unix\n+ # we need to set it to fork to make cupy NCCL work\n+ multiprocessing.set_start_method(\"fork\", force=True)\n+\n+ self._model_config = self._sanitize_model_config(self._model_config)\n+\n+ logger.info(\n+ f\"Loading {self.model_uid} with following model config: {self._model_config}\"\n+ )\n+\n+ engine_args = AsyncEngineArgs(\n+ model=self.model_path,\n+ **self._model_config,\n+ )\n+ self._engine = AsyncLLMEngine.from_engine_args(engine_args)\n+ self._tokenizer = AutoTokenizer.from_pretrained(", "line": null, "original_line": 678, "original_start_line": null, "path": "xinference/model/llm/vllm/core.py", "start_line": null, "text": "@user1:\nOnly add a tokenizer beyond the VLLMModel? Why not calling the base load method first?" }, { "diff_hunk": "@@ -606,3 +618,133 @@ async def async_chat(\n self.model_family, self.model_uid, c, tools\n )\n return self._to_chat_completion(c)\n+\n+\n+class VLLMVisionModel(VLLMModel, ChatModelMixin):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ model_config: Optional[VLLMModelConfig],\n+ peft_model: Optional[List[LoRA]] = None,\n+ ):\n+ super().__init__(\n+ model_uid,\n+ model_family,\n+ model_spec,\n+ quantization,\n+ model_path,\n+ model_config,\n+ peft_model,\n+ )\n+ self._tokenizer = None\n+\n+ def load(self):\n+ try:\n+ import vllm\n+ from vllm.engine.arg_utils import AsyncEngineArgs\n+ from vllm.engine.async_llm_engine import AsyncLLMEngine\n+ except ImportError:\n+ error_message = \"Failed to import module 'vllm'\"\n+ installation_guide = [\n+ \"Please make sure 'vllm' is installed. 
\",\n+ \"You can install it by `pip install vllm`\\n\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ if vllm.__version__ >= \"0.3.1\":\n+ # from vllm v0.3.1, it uses cupy as NCCL backend\n+ # in which cupy will fork a process\n+ # only for xoscar >= 0.3.0, new process is allowed in subpool\n+ # besides, xinference set start method as forkserver for unix\n+ # we need to set it to fork to make cupy NCCL work\n+ multiprocessing.set_start_method(\"fork\", force=True)\n+\n+ self._model_config = self._sanitize_model_config(self._model_config)\n+\n+ logger.info(\n+ f\"Loading {self.model_uid} with following model config: {self._model_config}\"\n+ )\n+\n+ engine_args = AsyncEngineArgs(\n+ model=self.model_path,\n+ **self._model_config,\n+ )\n+ self._engine = AsyncLLMEngine.from_engine_args(engine_args)\n+ self._tokenizer = AutoTokenizer.from_pretrained(\n+ self.model_path, trust_remote_code=True\n+ )\n+\n+ @classmethod\n+ def match(\n+ cls, llm_family: \"LLMFamilyV1\", llm_spec: \"LLMSpecV1\", quantization: str\n+ ) -> bool:\n+ if llm_spec.model_format != \"pytorch\":\n+ return False\n+ if llm_spec.model_format == \"pytorch\":\n+ if quantization != \"none\" and not (quantization is None):\n+ return False\n+ if isinstance(llm_family, CustomLLMFamilyV1):\n+ if llm_family.model_family not in VLLM_SUPPORTED_VISION_MODEL_LIST:\n+ return False\n+ else:\n+ if llm_family.model_name not in VLLM_SUPPORTED_VISION_MODEL_LIST:\n+ return False\n+ if \"chat\" not in llm_family.model_ability:", "line": null, "original_line": 697, "original_start_line": null, "path": "xinference/model/llm/vllm/core.py", "start_line": null, "text": "@user1:\nShall we consider the `vision` ability?" }, { "diff_hunk": "@@ -346,7 +363,121 @@ def chat(\n )\n ],\n usage=CompletionUsage(\n- prompt_tokens=-1, completion_tokens=-1, total_tokens=-1\n+ prompt_tokens=prompt_tokens,\n+ completion_tokens=completion_tokens,\n+ total_tokens=prompt_tokens + completion_tokens,\n+ ),\n+ )\n+ return chunk\n+\n+ def _generate_stream(\n+ self,\n+ pixel_values,\n+ content,\n+ generation_config,\n+ num_patches_list,\n+ history,\n+ include_usage,\n+ ):\n+ from threading import Thread\n+\n+ from transformers import TextIteratorStreamer\n+\n+ # Initialize the streamer\n+ streamer = TextIteratorStreamer(\n+ self._tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10\n+ )\n+ # Define the generation configuration\n+ generation_config[\"streamer\"] = streamer\n+ # Start the model chat in a separate thread\n+ thread = Thread(\n+ target=self._model.chat,\n+ kwargs=dict(\n+ tokenizer=self._tokenizer,\n+ pixel_values=pixel_values,\n+ question=content,\n+ num_patches_list=num_patches_list,\n+ history=history,\n+ return_history=False,\n+ generation_config=generation_config,\n ),\n )\n- return self._to_chat_completion(chunk)\n+ thread.start()\n+\n+ completion_id = str(uuid.uuid1())\n+ prompt_tokens, completion_tokens, total_tokens = 0, 0, 0\n+ prompt_tokens = self._get_input_tokens(content, history, num_patches_list)", "line": null, "original_line": 409, "original_start_line": null, "path": "xinference/model/llm/pytorch/intern_vl.py", "start_line": null, "text": "@user1:\nIs it possible that we generate the prompt, and turn it into token ids, so we can calculate the prompt_tokens, the benefit is that we don't need to tokenize twice." } ]
e1afe8f15c1d0e974077fcafca1d2eab1a66f24f
diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py index 7b41e0b776..6d06b0ecaa 100644 --- a/xinference/model/llm/__init__.py +++ b/xinference/model/llm/__init__.py @@ -127,7 +127,7 @@ def _install(): from .transformers.minicpmv26 import MiniCPMV26Model from .transformers.qwen_vl import QwenVLChatModel from .transformers.yi_vl import YiVLChatModel - from .vllm.core import VLLMChatModel, VLLMModel + from .vllm.core import VLLMChatModel, VLLMModel, VLLMVisionModel try: from .transformers.omnilmm import OmniLMMModel @@ -145,7 +145,7 @@ def _install(): ] ) SGLANG_CLASSES.extend([SGLANGModel, SGLANGChatModel]) - VLLM_CLASSES.extend([VLLMModel, VLLMChatModel]) + VLLM_CLASSES.extend([VLLMModel, VLLMChatModel, VLLMVisionModel]) MLX_CLASSES.extend([MLXModel, MLXChatModel]) TRANSFORMERS_CLASSES.extend( [ diff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json index 066c1d77a9..dafba3aa42 100644 --- a/xinference/model/llm/llm_family.json +++ b/xinference/model/llm/llm_family.json @@ -7083,32 +7083,195 @@ "model_format": "pytorch", "model_size_in_billions": 2, "quantizations": [ - "none" + "4-bit", + "8-bit", + "none" ], "model_id": "OpenGVLab/Mini-InternVL-Chat-2B-V1-5", - "model_revision": "ce3f67acff17281bacbf4b156f402a0580fb9605" + "model_revision": "ecbbd21dcf38caa74d925967b997167b0c7b3f47" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 4, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/Mini-InternVL-Chat-4B-V1-5", + "model_revision": "ce1559ddf9d87f5130aa5233b0e93b95e4e4161a" }, { "model_format": "pytorch", "model_size_in_billions": 26, "quantizations": [ - "none" + "4-bit", + "8-bit", + "none" ], "model_id": "OpenGVLab/InternVL-Chat-V1-5", - "model_revision": "e822119e5806946ce128043023a73d715ecabf8d" + "model_revision": "9db32d9127cac0c85961e169d75da57a18a847b1" + } + ], + "prompt_style": { + "style_name": "INTERNVL", + "system_prompt": "You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).", + "roles": [ + "<|im_start|>user", + "<|im_start|>assistant" + ], + "intra_message_sep": "<|im_end|>", + "stop_token_ids": [ + 2, + 92543, + 92542 + ], + "stop": [ + "</s>", + "<|im_end|>", + "<|im_start|>" + ] + } + }, + { + "version": 1, + "context_length": 32768, + "model_name": "internvl2", + "model_lang": [ + "en", + "zh" + ], + "model_ability": [ + "chat", + "vision" + ], + "model_description": "InternVL 2 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. 
", + "model_specs": [ + { + "model_format": "pytorch", + "model_size_in_billions": 1, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-1B", + "model_revision": "a9fc14aea824b6ea1d44f8778cad6b35512c4ce1" }, { "model_format": "pytorch", + "model_size_in_billions": 2, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-2B", + "model_revision": "422ad7c6335917bfb514958233955512338485a6" + }, + { + "model_format": "awq", + "model_size_in_billions": 2, + "quantizations": [ + "Int4" + ], + "model_id": "OpenGVLab/InternVL2-2B-AWQ", + "model_revision": "701bc3fc098a8a3b686b3b4135cfb77202be89e0" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 4, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-4B", + "model_revision": "b50544dafada6c41e80bfde2f57cc9b0140fc21c" + }, + { + "model_format": "awq", + "model_size_in_billions": 4, + "quantizations": [ + "Int4" + ], + "model_id": "OpenGVLab/InternVL2-8B-AWQ", + "model_revision": "9f1a4756b7ae18eb26d8a22b618dfc283e8193b3" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 8, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-8B", + "model_revision": "3bfd3664dea4f3da628785f5125d30f889701253" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 26, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-26B", + "model_revision": "b9f3c7e6d575b0115e076a3ffc46fd20b7586899" + }, + { + "model_format": "awq", "model_size_in_billions": 26, "quantizations": [ - "Int8" + "Int4" + ], + "model_id": "OpenGVLab/InternVL2-26B-AWQ", + "model_revision": "469e0019ffd251e22ff6501a5c2321964e86ef0d" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 40, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-40B", + "model_revision": "725a12063bb855c966e30a0617d0ccd9e870d772" + }, + { + "model_format": "awq", + "model_size_in_billions": 40, + "quantizations": [ + "Int4" + ], + "model_id": "OpenGVLab/InternVL2-40B-AWQ", + "model_revision": "d92e140f6dfe8ea9679924c6a31898f42c4e1846" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 76, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_id": "OpenGVLab/InternVL2-Llama3-76B", + "model_revision": "cf7914905f78e9e3560ddbd6f5dfc39becac494f" + }, + { + "model_format": "awq", + "model_size_in_billions": 76, + "quantizations": [ + "Int4" ], - "model_id": "OpenGVLab/InternVL-Chat-V1-5-{quantization}", - "model_revision": "acaaed06937c603ab04f084216ecb0268160f538" + "model_id": "OpenGVLab/InternVL2-Llama3-76B-AWQ", + "model_revision": "1bc796bf80f2ebc7d6a14c15f55217a4600d50a4" } ], "prompt_style": { - "style_name": "INTERNLM2", + "style_name": "INTERNVL", "system_prompt": "You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).", "roles": [ "<|im_start|>user", @@ -7116,10 +7279,14 @@ ], "intra_message_sep": "<|im_end|>", "stop_token_ids": [ + 2, + 92543, 92542 ], "stop": [ - "<|im_end|>" + "</s>", + "<|im_end|>", + "<|im_start|>" ] } }, diff --git a/xinference/model/llm/llm_family_modelscope.json b/xinference/model/llm/llm_family_modelscope.json index ba9ec1cbb2..b9eae2252d 100644 --- a/xinference/model/llm/llm_family_modelscope.json +++ b/xinference/model/llm/llm_family_modelscope.json @@ -4709,25 +4709,187 @@ "model_format": "pytorch", 
"model_size_in_billions": 26, "quantizations": [ - "none" + "4-bit", + "8-bit", + "none" ], - "model_hub": "modelscope", - "model_id": "AI-ModelScope/InternVL-Chat-V1-5", + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL-Chat-V1-5", + "model_revision": "master" + } + ], + "prompt_style": { + "style_name": "INTERNVL", + "system_prompt": "You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).", + "roles": [ + "<|im_start|>user", + "<|im_start|>assistant" + ], + "intra_message_sep": "<|im_end|>", + "stop_token_ids": [ + 2, + 92543, + 92542 + ], + "stop": [ + "</s>", + "<|im_end|>", + "<|im_start|>" + ] + } + }, + { + "version": 1, + "context_length": 32768, + "model_name": "internvl2", + "model_lang": [ + "en", + "zh" + ], + "model_ability": [ + "chat", + "vision" + ], + "model_description": "InternVL 2 is an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. ", + "model_specs": [ + + { + "model_format": "pytorch", + "model_size_in_billions": 1, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-1B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 2, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-2B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 2, + "quantizations": [ + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-2B-AWQ", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 4, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-4B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 8, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-8B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 8, + "quantizations": [ + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-8B-AWQ", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 26, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-26B", "model_revision": "master" }, { "model_format": "pytorch", "model_size_in_billions": 26, "quantizations": [ - "Int8" + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-26B-AWQ", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 40, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-40B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 40, + "quantizations": [ + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-40B-AWQ", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 76, + "quantizations": [ + "4-bit", + "8-bit", + "none" + ], + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-Llama3-76B", + "model_revision": "master" + }, + { + "model_format": "pytorch", + "model_size_in_billions": 76, 
+ "quantizations": [ + "none" ], - "model_hub": "modelscope", - "model_id": "AI-ModelScope/InternVL-Chat-V1-5-{quantization}", + "model_hub": "modelscope", + "model_id": "OpenGVLab/InternVL2-Llama3-76B-AWQ", "model_revision": "master" } ], "prompt_style": { - "style_name": "INTERNLM2", + "style_name": "INTERNVL", "system_prompt": "You are InternLM (书生·浦语), a helpful, honest, and harmless AI assistant developed by Shanghai AI Laboratory (上海人工智能实验室).", "roles": [ "<|im_start|>user", @@ -4735,10 +4897,14 @@ ], "intra_message_sep": "<|im_end|>", "stop_token_ids": [ + 2, + 92543, 92542 ], "stop": [ - "<|im_end|>" + "</s>", + "<|im_end|>", + "<|im_start|>" ] } }, diff --git a/xinference/model/llm/transformers/core.py b/xinference/model/llm/transformers/core.py index 7e7bc609b6..60d35c19d2 100644 --- a/xinference/model/llm/transformers/core.py +++ b/xinference/model/llm/transformers/core.py @@ -61,7 +61,7 @@ "yi-vl-chat", "deepseek-vl-chat", "internvl-chat", - "mini-internvl-chat", + "internvl2", "cogvlm2", "MiniCPM-Llama3-V-2_5", "MiniCPM-V-2.6", diff --git a/xinference/model/llm/transformers/intern_vl.py b/xinference/model/llm/transformers/intern_vl.py index d9155f3b4b..dedac5b1bd 100644 --- a/xinference/model/llm/transformers/intern_vl.py +++ b/xinference/model/llm/transformers/intern_vl.py @@ -11,28 +11,25 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. -import base64 import logging import time import uuid from concurrent.futures import ThreadPoolExecutor -from io import BytesIO -from typing import Dict, Iterator, List, Optional, Tuple, Union +from typing import Dict, Iterator, List, Optional, Union -import requests import torch -from PIL import Image -from ....model.utils import select_device from ....types import ( ChatCompletion, ChatCompletionChunk, ChatCompletionMessage, Completion, CompletionChoice, + CompletionChunk, CompletionUsage, ) from ..llm_family import LLMFamilyV1, LLMSpecV1 +from ..utils import _decode_image from .core import PytorchChatModel, PytorchGenerateConfig logger = logging.getLogger(__name__) @@ -41,6 +38,142 @@ IMAGENET_STD = (0.229, 0.224, 0.225) +def _message_content_to_intern(content, image_cnt): + if not isinstance(content, str): + texts = [] + image_urls = [] + for c in content: + c_type = c.get("type") + if c_type == "text": + texts.append(c["text"]) + elif c_type == "image_url": + image_urls.append(c["image_url"]["url"]) + image_futures = [] + with ThreadPoolExecutor() as executor: + for image_url in image_urls: + fut = executor.submit(_decode_image, image_url) + image_futures.append(fut) + images = [fut.result() for fut in image_futures] + prefix = "" + for i, _ in enumerate(images): + prefix += f"Image-{image_cnt + i + 1}: <image>\n\n" + text = prefix + " ".join(texts) + if len(images) == 0: + return text, [] + else: + return text, images + return content, [] + + +def _get_prompt_and_chat_history( + prompt: Union[str, List[Dict]], + chat_history: Optional[List[ChatCompletionMessage]] = None, +): + # Convert openai history to intern vl history + images = [] + history = [] + image_cnt = 0 + for h1, h2 in zip(*[iter(chat_history or [])] * 2): + content1, img = _message_content_to_intern(h1["content"], image_cnt) + content2, _ = _message_content_to_intern(h2["content"], image_cnt) + history.append([content1, content2]) + images.extend(img) + image_cnt += len(img) + + question, img = _message_content_to_intern(prompt, image_cnt) + 
images.extend(img) + return question, history, images + + +def _build_transform(input_size=448): + import torchvision.transforms as T + from torchvision.transforms.functional import InterpolationMode + + MEAN, STD = IMAGENET_MEAN, IMAGENET_STD + transform = T.Compose( + [ + T.Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img), + T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC), + T.ToTensor(), + T.Normalize(mean=MEAN, std=STD), + ] + ) + return transform + + +def _find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size): + best_ratio_diff = float("inf") + best_ratio = (1, 1) + area = width * height + for ratio in target_ratios: + target_aspect_ratio = ratio[0] / ratio[1] + ratio_diff = abs(aspect_ratio - target_aspect_ratio) + if ratio_diff < best_ratio_diff: + best_ratio_diff = ratio_diff + best_ratio = ratio + elif ratio_diff == best_ratio_diff: + if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]: + best_ratio = ratio + return best_ratio + + +def _dynamic_preprocess( + image, min_num=1, max_num=12, image_size=448, use_thumbnail=False +): + orig_width, orig_height = image.size + aspect_ratio = orig_width / orig_height + + # calculate the existing image aspect ratio + target_ratios = set( + (i, j) + for n in range(min_num, max_num + 1) + for i in range(1, n + 1) + for j in range(1, n + 1) + if i * j <= max_num and i * j >= min_num + ) + target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1]) + + # find the closest aspect ratio to the target + target_aspect_ratio = _find_closest_aspect_ratio( + aspect_ratio, target_ratios, orig_width, orig_height, image_size + ) + + # calculate the target width and height + target_width = image_size * target_aspect_ratio[0] + target_height = image_size * target_aspect_ratio[1] + blocks = target_aspect_ratio[0] * target_aspect_ratio[1] + + # resize the image + resized_img = image.resize((target_width, target_height)) + processed_images = [] + for i in range(blocks): + box = ( + (i % (target_width // image_size)) * image_size, + (i // (target_width // image_size)) * image_size, + ((i % (target_width // image_size)) + 1) * image_size, + ((i // (target_width // image_size)) + 1) * image_size, + ) + # split the image + split_img = resized_img.crop(box) + processed_images.append(split_img) + assert len(processed_images) == blocks + if use_thumbnail and len(processed_images) != 1: + thumbnail_img = image.resize((image_size, image_size)) + processed_images.append(thumbnail_img) + return processed_images + + +def _load_image(image_file, input_size=448, max_num=12): + image = image_file.convert("RGB") + transform = _build_transform(input_size=input_size) + images = _dynamic_preprocess( + image, image_size=input_size, use_thumbnail=True, max_num=max_num + ) + pixel_values = [transform(image) for image in images] + pixel_values = torch.stack(pixel_values) + return pixel_values + + class InternVLChatModel(PytorchChatModel): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -52,249 +185,89 @@ def match( cls, model_family: "LLMFamilyV1", model_spec: "LLMSpecV1", quantization: str ) -> bool: family = model_family.model_family or model_family.model_name - if "internvl" in family.lower(): - return True - return False + if "internvl" not in family.lower(): + return False + if "pytorch" not in model_spec.model_format: + return False + return True def _get_model_class(self): from transformers import AutoModel return AutoModel + # Copy from InternVL page + # 
reference: https://huggingface.co/OpenGVLab/InternVL2-8B + def _split_model(self): + import math + + device_map = {} + world_size = torch.cuda.device_count() + # single gpu + if world_size == 1: + return None + model_size = f"{self.model_spec.model_size_in_billions}B" + num_layers = { + "1B": 24, + "2B": 24, + "4B": 32, + "8B": 32, + "26B": 48, + "40B": 60, + "76B": 80, + }[model_size] + # Since the first GPU will be used for ViT, treat it as half a GPU. + num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5)) + num_layers_per_gpu = [num_layers_per_gpu] * world_size + num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5) + layer_cnt = 0 + for i, num_layer in enumerate(num_layers_per_gpu): + for j in range(num_layer): + device_map[f"language_model.model.layers.{layer_cnt}"] = i + layer_cnt += 1 + device_map["vision_model"] = 0 + device_map["mlp1"] = 0 + device_map["language_model.model.tok_embeddings"] = 0 + device_map["language_model.model.embed_tokens"] = 0 + device_map["language_model.output"] = 0 + device_map["language_model.model.norm"] = 0 + device_map["language_model.lm_head"] = 0 + device_map[f"language_model.model.layers.{num_layers - 1}"] = 0 + return device_map + def load(self, **kwargs): from transformers import AutoModel, AutoTokenizer - from transformers.generation import GenerationConfig if self._check_tensorizer_integrity(): self._model, self._tokenizer = self._load_tensorizer() return - device = self._pytorch_model_config.get("device", "auto") - device = select_device(device) - # for multiple GPU, set back to auto to make multiple devices work - device = "auto" if device == "cuda" else device - - self._tokenizer = AutoTokenizer.from_pretrained( - self.model_path, - trust_remote_code=True, - ) + device = self._split_model() kwargs = { "torch_dtype": torch.bfloat16, "low_cpu_mem_usage": True, "trust_remote_code": True, - "device_map": device, } - if "int8" in self.quantization.lower(): + if device is not None: + kwargs["device_map"] = device + + if "8-bit" in self.quantization.lower(): kwargs["load_in_8bit"] = True - elif 2 == self.model_spec.model_size_in_billions: - kwargs.pop("device_map") + elif "4-bit" in self.quantization.lower(): + kwargs["load_in_4bit"] = True self._model = AutoModel.from_pretrained(self.model_path, **kwargs).eval() - if "int8" not in self.quantization.lower(): + if device is None and "none" in self.quantization.lower(): self._model.cuda() - # Specify hyperparameters for generation - self._model.generation_config = GenerationConfig.from_pretrained( + self._tokenizer = AutoTokenizer.from_pretrained( self.model_path, trust_remote_code=True, + use_fast=False, ) - self._save_tensorizer() - - def _message_content_to_intern(self, content): - def _load_image(_url): - if _url.startswith("data:"): - logging.info("Parse url by base64 decoder.") - # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images - # e.g. 
f"data:image/jpeg;base64,{base64_image}" - _type, data = _url.split(";") - _, ext = _type.split("/") - data = data[len("base64,") :] - data = base64.b64decode(data.encode("utf-8")) - return Image.open(BytesIO(data)).convert("RGB") - else: - try: - response = requests.get(_url) - except requests.exceptions.MissingSchema: - return Image.open(_url).convert("RGB") - else: - return Image.open(BytesIO(response.content)).convert("RGB") - - if not isinstance(content, str): - texts = [] - image_urls = [] - for c in content: - c_type = c.get("type") - if c_type == "text": - texts.append(c["text"]) - elif c_type == "image_url": - image_urls.append(c["image_url"]["url"]) - image_futures = [] - with ThreadPoolExecutor() as executor: - for image_url in image_urls: - fut = executor.submit(_load_image, image_url) - image_futures.append(fut) - images = [fut.result() for fut in image_futures] - text = " ".join(texts) - if len(images) == 0: - return text, None - else: - return text, images - return content, None - - def _history_content_to_intern( - self, - chat_history: List[ChatCompletionMessage], - IMG_START_TOKEN="<img>", - IMG_END_TOKEN="</img>", - IMG_CONTEXT_TOKEN="<IMG_CONTEXT>", - ): - def _image_to_piexl_values(images): - load_images = [] - for image in images: - if image.startswith("data:"): - logging.info("Parse url by base64 decoder.") - # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images - # e.g. f"data:image/jpeg;base64,{base64_image}" - _type, data = image.split(";") - _, ext = _type.split("/") - data = data[len("base64,") :] - data = base64.b64decode(data.encode("utf-8")) - img = Image.open(BytesIO(data)).convert("RGB") - pixel_value = ( - self._load_image(img, max_num=6).to(torch.bfloat16).cuda() - ) - load_images.append(pixel_value) - else: - try: - response = requests.get(image) - except requests.exceptions.MissingSchema: - img = Image.open(image).convert("RGB") - else: - img = Image.open(BytesIO(response.content)).convert("RGB") - pixel_value = ( - self._load_image(img, max_num=6).to(torch.bfloat16).cuda() - ) - load_images.append(pixel_value) - return torch.cat(tuple(load_images), dim=0) - - history: List[Tuple] = [] - pixel_values = None - for i in range(0, len(chat_history), 2): - tmp = [] - images: List[str] = [] - user = chat_history[i]["content"] - if isinstance(user, List): - for content in user: - c_type = content.get("type") - if c_type == "text": - tmp.append(content["text"]) - elif c_type == "image_url" and not history: - images.append(content["image_url"]["url"]) - if not history: - pixel_values = _image_to_piexl_values(images) - image_bs = pixel_values.shape[0] - image_tokens = ( - IMG_START_TOKEN - + IMG_CONTEXT_TOKEN * self._model.num_image_token * image_bs - + IMG_END_TOKEN - ) - tmp[0] = image_tokens + "\n" + tmp[0] - else: - tmp.append(user) - tmp.append(chat_history[i + 1]["content"]) - history.append(tuple(tmp)) - return history, pixel_values - - def _find_closest_aspect_ratio( - self, aspect_ratio, target_ratios, width, height, image_size - ): - best_ratio_diff = float("inf") - best_ratio = (1, 1) - area = width * height - for ratio in target_ratios: - target_aspect_ratio = ratio[0] / ratio[1] - ratio_diff = abs(aspect_ratio - target_aspect_ratio) - if ratio_diff < best_ratio_diff: - best_ratio_diff = ratio_diff - best_ratio = ratio - elif ratio_diff == best_ratio_diff: - if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]: - best_ratio = ratio - return best_ratio - - def _dynamic_preprocess( - self, image, min_num=1, 
max_num=6, image_size=448, use_thumbnail=False - ): - orig_width, orig_height = image.size - aspect_ratio = orig_width / orig_height - - # calculate the existing image aspect ratio - target_ratios = set( - (i, j) - for n in range(min_num, max_num + 1) - for i in range(1, n + 1) - for j in range(1, n + 1) - if i * j <= max_num and i * j >= min_num - ) - target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1]) - - # find the closest aspect ratio to the target - target_aspect_ratio = self._find_closest_aspect_ratio( - aspect_ratio, target_ratios, orig_width, orig_height, image_size - ) - - # calculate the target width and height - target_width = image_size * target_aspect_ratio[0] - target_height = image_size * target_aspect_ratio[1] - blocks = target_aspect_ratio[0] * target_aspect_ratio[1] - - # resize the image - resized_img = image.resize((target_width, target_height)) - processed_images = [] - for i in range(blocks): - box = ( - (i % (target_width // image_size)) * image_size, - (i // (target_width // image_size)) * image_size, - ((i % (target_width // image_size)) + 1) * image_size, - ((i // (target_width // image_size)) + 1) * image_size, - ) - # split the image - split_img = resized_img.crop(box) - processed_images.append(split_img) - assert len(processed_images) == blocks - if use_thumbnail and len(processed_images) != 1: - thumbnail_img = image.resize((image_size, image_size)) - processed_images.append(thumbnail_img) - return processed_images - - def _build_transform(self, input_size): - import torchvision.transforms as T - from torchvision.transforms.functional import InterpolationMode - - MEAN, STD = IMAGENET_MEAN, IMAGENET_STD - transform = T.Compose( - [ - T.Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img), - T.Resize( - (input_size, input_size), interpolation=InterpolationMode.BICUBIC - ), - T.ToTensor(), - T.Normalize(mean=MEAN, std=STD), - ] - ) - return transform - - def _load_image(self, image_file, input_size=448, max_num=6): - transform = self._build_transform(input_size=input_size) - images = self._dynamic_preprocess( - image_file, image_size=input_size, use_thumbnail=True, max_num=max_num - ) - pixel_values = [transform(image) for image in images] - pixel_values = torch.stack(pixel_values) - return pixel_values def chat( self, @@ -303,38 +276,108 @@ def chat( chat_history: Optional[List[ChatCompletionMessage]] = None, generate_config: Optional[PytorchGenerateConfig] = None, ) -> Union[ChatCompletion, Iterator[ChatCompletionChunk]]: - if generate_config and generate_config.get("stream"): - raise Exception( - f"Chat with model {self.model_family.model_name} does not support stream." 
- ) - sanitized_config = { - "num_beams": 1, - "max_new_tokens": generate_config.get("max_tokens", 512) + from ....thirdparty.internvl.conversation import get_conv_template + + IMG_START_TOKEN = "<img>" + IMG_END_TOKEN = "</img>" + IMG_CONTEXT_TOKEN = "<IMG_CONTEXT>" + + generation_config = { + "max_new_tokens": generate_config.get("max_tokens", 1024) if generate_config - else 512, + else 1024, "do_sample": False, } - content, image = self._message_content_to_intern(prompt) + stream = ( + generate_config.get("stream", False) + if isinstance(generate_config, dict) + else False + ) + stream_options = ( + generate_config.get("stream_options", None) + if isinstance(generate_config, dict) + else False + ) + include_usage = ( + stream_options["include_usage"] + if isinstance(stream_options, dict) + else False + ) + + content, history, images = _get_prompt_and_chat_history(prompt, chat_history) - history = None - if chat_history: - history, pixel_values = self._history_content_to_intern(chat_history) + num_patches_list = [] + if len(images) == 1: + content = content.replace("Image-1: <image>\n\n", "<image>\n") + history = [ + [item[0].replace("Image-1: <image>\n\n", "<image>\n"), item[1]] + for item in history + ] + pixel_values = _load_image(images[-1], max_num=12).to(torch.bfloat16).cuda() + num_patches_list = ( + [pixel_values.shape[0]] if pixel_values is not None else [] + ) + elif len(images) > 1: + pixel_values = [ + _load_image(img, max_num=12).to(torch.bfloat16).cuda() for img in images + ] + num_patches_list = [values.size(0) for values in pixel_values] + pixel_values = torch.cat(pixel_values, dim=0) else: - load_images = [] - for img in image: - pixel_value = self._load_image(img, max_num=6).to(torch.bfloat16).cuda() - load_images.append(pixel_value) - pixel_values = torch.cat(tuple(load_images), dim=0) - - response, history = self._model.chat( - self._tokenizer, - pixel_values, - content, - sanitized_config, - history=history, - return_history=True, - ) + pixel_values = None + + assert pixel_values is None or len(pixel_values) == sum(num_patches_list) + + img_context_token_id = self._tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN) + self._model.img_context_token_id = img_context_token_id + + template = get_conv_template(self._model.template) + template.system_message = self._model.system_message + eos_token_id = self._tokenizer.convert_tokens_to_ids(template.sep) + + history = [] if history is None else history + for old_question, old_answer in history: + template.append_message(template.roles[0], old_question) + template.append_message(template.roles[1], old_answer) + template.append_message(template.roles[0], content) + template.append_message(template.roles[1], None) + query = template.get_prompt() + + for num_patches in num_patches_list: + image_tokens = ( + IMG_START_TOKEN + + IMG_CONTEXT_TOKEN * self._model.num_image_token * num_patches + + IMG_END_TOKEN + ) + query = query.replace("<image>", image_tokens, 1) + + model_inputs = self._tokenizer(query, return_tensors="pt") + input_ids = model_inputs["input_ids"].cuda() + attention_mask = model_inputs["attention_mask"].cuda() + generation_config["eos_token_id"] = eos_token_id + generate_kwargs = { + "pixel_values": pixel_values, + "input_ids": input_ids, + "attention_mask": attention_mask, + } + generate_kwargs.update(generation_config) + + if stream: + chunk = self._generate_stream(generate_kwargs, input_ids, include_usage) + return self._to_chat_completion_chunks(chunk) + else: + chunk = self._generate(generate_kwargs, 
input_ids, template) + return self._to_chat_completion(chunk) + + def _generate(self, generate_kwargs, input_ids, template): + prompt_tokens = len(input_ids[0]) + generation_output = self._model.generate(**generate_kwargs) + completion_tokens = len(generation_output[0]) + response = self._tokenizer.batch_decode( + generation_output, skip_special_tokens=True + )[0] + response = response.split(template.sep)[0].strip() chunk = Completion( id=str(uuid.uuid1()), object="text_completion", @@ -346,7 +389,69 @@ def chat( ) ], usage=CompletionUsage( - prompt_tokens=-1, completion_tokens=-1, total_tokens=-1 + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + total_tokens=prompt_tokens + completion_tokens, ), ) - return self._to_chat_completion(chunk) + return chunk + + def _generate_stream(self, generate_kwargs, input_ids, include_usage): + from threading import Thread + + from transformers import TextIteratorStreamer + + # Initialize the streamer + streamer = TextIteratorStreamer( + self._tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10 + ) + # Define the generation configuration + generate_kwargs["streamer"] = streamer + # Start the model chat in a separate thread + thread = Thread( + target=self._model.generate, + kwargs=generate_kwargs, + ) + thread.start() + + completion_id = str(uuid.uuid1()) + prompt_tokens = len(input_ids[0]) + completion_tokens = 0 + # Loop through the streamer to get the new text as it is generated + for i, new_text in enumerate(streamer): + if new_text == self._model.conv_template.sep: + break + completion_choice = CompletionChoice( + text=new_text, index=0, logprobs=None, finish_reason=None + ) + chunk = CompletionChunk( + id=completion_id, + object="text_completion", + created=int(time.time()), + model=self.model_uid, + choices=[completion_choice], + ) + completion_tokens = max(completion_tokens, len(streamer.token_cache)) + total_tokens = prompt_tokens + completion_tokens + completion_usage = CompletionUsage( + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + total_tokens=total_tokens, + ) + chunk["usage"] = completion_usage + yield chunk + + if include_usage: + chunk = CompletionChunk( + id=completion_id, + object="text_completion", + created=int(time.time()), + model=self.model_uid, + choices=[], + ) + chunk["usage"] = CompletionUsage( + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + total_tokens=total_tokens, + ) + yield chunk diff --git a/xinference/model/llm/transformers/utils.py b/xinference/model/llm/transformers/utils.py index 5d64604315..5ada9a512c 100644 --- a/xinference/model/llm/transformers/utils.py +++ b/xinference/model/llm/transformers/utils.py @@ -42,7 +42,6 @@ if TYPE_CHECKING: from ...llm.transformers.core import PytorchModel - logger = logging.getLogger(__name__) diff --git a/xinference/model/llm/utils.py b/xinference/model/llm/utils.py index 94024d6e83..2caaa83c71 100644 --- a/xinference/model/llm/utils.py +++ b/xinference/model/llm/utils.py @@ -11,14 +11,19 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+import base64 import functools import json import logging import os import time import uuid +from io import BytesIO from typing import AsyncGenerator, Dict, Iterator, List, Optional, Tuple, cast +import requests +from PIL import Image + from ...types import ( SPECIAL_TOOL_PROMPT, ChatCompletion, @@ -60,7 +65,7 @@ def get_prompt( chat_history: List[ChatCompletionMessage], prompt_style: PromptStyleV1, tools: Optional[List[Dict]] = None, - ) -> str: + ): """ Inspired by FastChat. Format chat history into a prompt according to the prompty style of different models. @@ -440,6 +445,52 @@ def get_role(role_name: str): else: ret += role return ret + elif prompt_style.style_name == "INTERNVL": + ret = ( + "<s>" + if prompt_style.system_prompt == "" + else "<s><|im_start|>system\n" + + prompt_style.system_prompt + + prompt_style.intra_message_sep + + "\n" + ) + images = [] # type: ignore + for message in chat_history: + role = get_role(message["role"]) + content = message["content"] + if isinstance(content, str): + ret += role + "\n" + content + prompt_style.intra_message_sep + "\n" + elif isinstance(content, list): + text = "" + image_urls = [] + for c in content: + c_type = c.get("type") + if c_type == "text": + text = c["text"] + elif c_type == "image_url": + image_urls.append(c["image_url"]["url"]) + image_futures = [] + from concurrent.futures import ThreadPoolExecutor + + with ThreadPoolExecutor() as executor: + for image_url in image_urls: + fut = executor.submit(_decode_image, image_url) + image_futures.append(fut) + images = [fut.result() for fut in image_futures] + if len(image_futures) == 0: + ret += ( + role + "\n" + text + prompt_style.intra_message_sep + "\n" + ) + else: + ret += ( + role + + "\n" + + f"<image>\n{text}" + + prompt_style.intra_message_sep + + "\n" + ) + + return (ret, images) else: raise ValueError(f"Invalid prompt style: {prompt_style.style_name}") @@ -821,3 +872,22 @@ def get_model_version( llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str ) -> str: return f"{llm_family.model_name}--{llm_spec.model_size_in_billions}B--{llm_spec.model_format}--{quantization}" + + +def _decode_image(_url): + if _url.startswith("data:"): + logging.info("Parse url by base64 decoder.") + # https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images + # e.g. 
f"data:image/jpeg;base64,{base64_image}" + _type, data = _url.split(";") + _, ext = _type.split("/") + data = data[len("base64,") :] + data = base64.b64decode(data.encode("utf-8")) + return Image.open(BytesIO(data)).convert("RGB") + else: + try: + response = requests.get(_url) + except requests.exceptions.MissingSchema: + return Image.open(_url).convert("RGB") + else: + return Image.open(BytesIO(response.content)).convert("RGB") diff --git a/xinference/model/llm/vllm/core.py b/xinference/model/llm/vllm/core.py index 280150f9a3..49a324e5d9 100644 --- a/xinference/model/llm/vllm/core.py +++ b/xinference/model/llm/vllm/core.py @@ -21,6 +21,7 @@ import uuid from typing import ( TYPE_CHECKING, + Any, AsyncGenerator, Dict, Iterable, @@ -88,6 +89,9 @@ class VLLMGenerateConfig(TypedDict, total=False): except ImportError: VLLM_INSTALLED = False +VLLM_SUPPORTED_VISION_MODEL_LIST: List[str] = [ + "internvl2", +] VLLM_SUPPORTED_MODELS = [ "llama-2", "llama-3", @@ -413,7 +417,7 @@ def _convert_request_output_to_completion( async def async_generate( self, - prompt: str, + prompt: Union[str, Dict[str, Any]], generate_config: Optional[Dict] = None, tools: object = False, ) -> Union[Completion, AsyncGenerator[CompletionChunk, None]]: @@ -636,3 +640,106 @@ async def async_chat( self.model_family, self.model_uid, c, tools ) return self._to_chat_completion(c) + + +class VLLMVisionModel(VLLMModel, ChatModelMixin): + def load(self): + try: + import vllm + from vllm.engine.arg_utils import AsyncEngineArgs + from vllm.engine.async_llm_engine import AsyncLLMEngine + except ImportError: + error_message = "Failed to import module 'vllm'" + installation_guide = [ + "Please make sure 'vllm' is installed. ", + "You can install it by `pip install vllm`\n", + ] + raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}") + + if vllm.__version__ >= "0.3.1": + # from vllm v0.3.1, it uses cupy as NCCL backend + # in which cupy will fork a process + # only for xoscar >= 0.3.0, new process is allowed in subpool + # besides, xinference set start method as forkserver for unix + # we need to set it to fork to make cupy NCCL work + multiprocessing.set_start_method("fork", force=True) + + self._model_config = self._sanitize_model_config(self._model_config) + + logger.info( + f"Loading {self.model_uid} with following model config: {self._model_config}" + ) + + engine_args = AsyncEngineArgs( + model=self.model_path, + **self._model_config, + ) + self._engine = AsyncLLMEngine.from_engine_args(engine_args) + + @classmethod + def match( + cls, llm_family: "LLMFamilyV1", llm_spec: "LLMSpecV1", quantization: str + ) -> bool: + if llm_spec.model_format != "pytorch": + return False + if llm_spec.model_format == "pytorch": + if quantization != "none" and not (quantization is None): + return False + if isinstance(llm_family, CustomLLMFamilyV1): + if llm_family.model_family not in VLLM_SUPPORTED_VISION_MODEL_LIST: + return False + else: + if llm_family.model_name not in VLLM_SUPPORTED_VISION_MODEL_LIST: + return False + if "vision" not in llm_family.model_ability: + return False + return VLLM_INSTALLED + + def _sanitize_chat_config( + self, + generate_config: Optional[Dict] = None, + ) -> Dict: + if not generate_config: + generate_config = {} + if self.model_family.prompt_style: + if self.model_family.prompt_style.stop_token_ids: + generate_config.setdefault( + "stop_token_ids", + self.model_family.prompt_style.stop_token_ids.copy(), + ) + return generate_config + + async def async_chat( + self, + prompt: str, + system_prompt: 
Optional[str] = None, + chat_history: Optional[List[ChatCompletionMessage]] = None, + generate_config: Optional[Dict] = None, + ) -> Union[ChatCompletion, AsyncGenerator[ChatCompletionChunk, None]]: + # only support single image, waiting vllm support multi images + assert self.model_family.prompt_style is not None + prompt_style = self.model_family.prompt_style.copy() + chat_history = chat_history or [] + prompt, images = self.get_prompt(prompt, chat_history, prompt_style) + logger.info(f"messages:{prompt}") + if len(images) == 0: + inputs = { + "prompt": prompt, + } + else: + inputs = { + "prompt": prompt, + "multi_modal_data": {"image": images[-1]}, # type: ignore + } + generate_config = self._sanitize_chat_config(generate_config) + + stream = generate_config.get("stream", None) + + if stream: + agen = await self.async_generate(inputs, generate_config) + assert isinstance(agen, AsyncGenerator) + return self._async_to_chat_completion_chunks(agen) + else: + c = await self.async_generate(inputs, generate_config) + assert not isinstance(c, AsyncGenerator) + return self._to_chat_completion(c) diff --git a/xinference/thirdparty/internvl/__init__.py b/xinference/thirdparty/internvl/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xinference/thirdparty/internvl/conversation.py b/xinference/thirdparty/internvl/conversation.py new file mode 100644 index 0000000000..2fe37ad08c --- /dev/null +++ b/xinference/thirdparty/internvl/conversation.py @@ -0,0 +1,393 @@ +""" +Conversation prompt templates. + +We kindly request that you import fastchat instead of copying this file if you wish to use it. +If you have changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates. +""" + +import dataclasses +from enum import IntEnum, auto +from typing import Any, Dict, List, Tuple, Union + + +class SeparatorStyle(IntEnum): + """Separator styles.""" + + ADD_COLON_SINGLE = auto() + ADD_COLON_TWO = auto() + ADD_COLON_SPACE_SINGLE = auto() + NO_COLON_SINGLE = auto() + NO_COLON_TWO = auto() + ADD_NEW_LINE_SINGLE = auto() + LLAMA2 = auto() + CHATGLM = auto() + CHATML = auto() + CHATINTERN = auto() + DOLLY = auto() + RWKV = auto() + PHOENIX = auto() + ROBIN = auto() + FALCON_CHAT = auto() + CHATGLM3 = auto() + INTERNVL_ZH = auto() + MPT = auto() + + [email protected] +class Conversation: + """A class that manages prompt templates and keeps all conversation history.""" + + # The name of this template + name: str + # The template of the system prompt + system_template: str = '{system_message}' + # The system message + system_message: str = '' + # The names of two roles + roles: Tuple[str] = ('USER', 'ASSISTANT') + # All messages. Each item is (role, message). 
+ messages: List[List[str]] = () + # The number of few shot examples + offset: int = 0 + # The separator style and configurations + sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE + sep: str = '\n' + sep2: str = None + # Stop criteria (the default one is EOS token) + stop_str: Union[str, List[str]] = None + # Stops generation if meeting any token in this list + stop_token_ids: List[int] = None + + def get_prompt(self) -> str: + """Get the prompt for generation.""" + system_prompt = self.system_template.format(system_message=self.system_message) + if self.sep_style == SeparatorStyle.ADD_COLON_SINGLE: + ret = system_prompt + self.sep + for role, message in self.messages: + if message: + ret += role + ': ' + message + self.sep + else: + ret += role + ':' + return ret + elif self.sep_style == SeparatorStyle.ADD_COLON_TWO: + seps = [self.sep, self.sep2] + ret = system_prompt + seps[0] + for i, (role, message) in enumerate(self.messages): + if message: + ret += role + ': ' + message + seps[i % 2] + else: + ret += role + ':' + return ret + elif self.sep_style == SeparatorStyle.ADD_COLON_SPACE_SINGLE: + ret = system_prompt + self.sep + for role, message in self.messages: + if message: + ret += role + ': ' + message + self.sep + else: + ret += role + ': ' # must be end with a space + return ret + elif self.sep_style == SeparatorStyle.ADD_NEW_LINE_SINGLE: + ret = '' if system_prompt == '' else system_prompt + self.sep + for role, message in self.messages: + if message: + ret += role + '\n' + message + self.sep + else: + ret += role + '\n' + return ret + elif self.sep_style == SeparatorStyle.NO_COLON_SINGLE: + ret = system_prompt + for role, message in self.messages: + if message: + ret += role + message + self.sep + else: + ret += role + return ret + elif self.sep_style == SeparatorStyle.NO_COLON_TWO: + seps = [self.sep, self.sep2] + ret = system_prompt + for i, (role, message) in enumerate(self.messages): + if message: + ret += role + message + seps[i % 2] + else: + ret += role + return ret + elif self.sep_style == SeparatorStyle.RWKV: + ret = system_prompt + for i, (role, message) in enumerate(self.messages): + if message: + ret += ( + role + + ': ' + + message.replace('\r\n', '\n').replace('\n\n', '\n') + ) + ret += '\n\n' + else: + ret += role + ':' + return ret + elif self.sep_style == SeparatorStyle.LLAMA2: + seps = [self.sep, self.sep2] + if self.system_message: + ret = system_prompt + else: + ret = '[INST] ' + for i, (role, message) in enumerate(self.messages): + tag = self.roles[i % 2] + if message: + if i == 0: + ret += message + ' ' + else: + ret += tag + ' ' + message + seps[i % 2] + else: + ret += tag + return ret + elif self.sep_style == SeparatorStyle.CHATGLM: + # source: https://huggingface.co/THUDM/chatglm-6b/blob/1d240ba371910e9282298d4592532d7f0f3e9f3e/modeling_chatglm.py#L1302-L1308 + # source2: https://huggingface.co/THUDM/chatglm2-6b/blob/e186c891cf64310ac66ef10a87e6635fa6c2a579/modeling_chatglm.py#L926 + round_add_n = 1 if self.name == 'chatglm2' else 0 + if system_prompt: + ret = system_prompt + self.sep + else: + ret = '' + + for i, (role, message) in enumerate(self.messages): + if i % 2 == 0: + ret += f'[Round {i//2 + round_add_n}]{self.sep}' + + if message: + ret += f'{role}:{message}{self.sep}' + else: + ret += f'{role}:' + return ret + elif self.sep_style == SeparatorStyle.CHATML: + ret = '' if system_prompt == '' else system_prompt + self.sep + '\n' + for role, message in self.messages: + if message: + ret += role + '\n' + message + self.sep + '\n' + else: 
+ ret += role + '\n' + return ret + elif self.sep_style == SeparatorStyle.CHATGLM3: + ret = '' + if self.system_message: + ret += system_prompt + for role, message in self.messages: + if message: + ret += role + '\n' + ' ' + message + else: + ret += role + return ret + elif self.sep_style == SeparatorStyle.CHATINTERN: + # source: https://huggingface.co/internlm/internlm-chat-7b-8k/blob/bd546fa984b4b0b86958f56bf37f94aa75ab8831/modeling_internlm.py#L771 + seps = [self.sep, self.sep2] + ret = system_prompt + for i, (role, message) in enumerate(self.messages): + # if i % 2 == 0: + # ret += "<s>" + if message: + ret += role + ':' + message + seps[i % 2] + '\n' + else: + ret += role + ':' + return ret + elif self.sep_style == SeparatorStyle.DOLLY: + seps = [self.sep, self.sep2] + ret = system_prompt + for i, (role, message) in enumerate(self.messages): + if message: + ret += role + ':\n' + message + seps[i % 2] + if i % 2 == 1: + ret += '\n\n' + else: + ret += role + ':\n' + return ret + elif self.sep_style == SeparatorStyle.PHOENIX: + ret = system_prompt + for role, message in self.messages: + if message: + ret += role + ': ' + '<s>' + message + '</s>' + else: + ret += role + ': ' + '<s>' + return ret + elif self.sep_style == SeparatorStyle.ROBIN: + ret = system_prompt + self.sep + for role, message in self.messages: + if message: + ret += role + ':\n' + message + self.sep + else: + ret += role + ':\n' + return ret + elif self.sep_style == SeparatorStyle.FALCON_CHAT: + ret = '' + if self.system_message: + ret += system_prompt + self.sep + for role, message in self.messages: + if message: + ret += role + ': ' + message + self.sep + else: + ret += role + ':' + + return ret + elif self.sep_style == SeparatorStyle.INTERNVL_ZH: + seps = [self.sep, self.sep2] + ret = self.system_message + seps[0] + for i, (role, message) in enumerate(self.messages): + if message: + ret += role + ': ' + message + seps[i % 2] + else: + ret += role + ':' + return ret + elif self.sep_style == SeparatorStyle.MPT: + ret = system_prompt + self.sep + for role, message in self.messages: + if message: + if type(message) is tuple: + message, _, _ = message + ret += role + message + self.sep + else: + ret += role + return ret + else: + raise ValueError(f'Invalid style: {self.sep_style}') + + def set_system_message(self, system_message: str): + """Set the system message.""" + self.system_message = system_message + + def append_message(self, role: str, message: str): + """Append a new message.""" + self.messages.append([role, message]) + + def update_last_message(self, message: str): + """Update the last output. + + The last message is typically set to be None when constructing the prompt, + so we need to update it in-place after getting the response from a model. 
+ """ + self.messages[-1][1] = message + + def to_gradio_chatbot(self): + """Convert the conversation to gradio chatbot format.""" + ret = [] + for i, (role, msg) in enumerate(self.messages[self.offset :]): + if i % 2 == 0: + ret.append([msg, None]) + else: + ret[-1][-1] = msg + return ret + + def to_openai_api_messages(self): + """Convert the conversation to OpenAI chat completion format.""" + ret = [{'role': 'system', 'content': self.system_message}] + + for i, (_, msg) in enumerate(self.messages[self.offset :]): + if i % 2 == 0: + ret.append({'role': 'user', 'content': msg}) + else: + if msg is not None: + ret.append({'role': 'assistant', 'content': msg}) + return ret + + def copy(self): + return Conversation( + name=self.name, + system_template=self.system_template, + system_message=self.system_message, + roles=self.roles, + messages=[[x, y] for x, y in self.messages], + offset=self.offset, + sep_style=self.sep_style, + sep=self.sep, + sep2=self.sep2, + stop_str=self.stop_str, + stop_token_ids=self.stop_token_ids, + ) + + def dict(self): + return { + 'template_name': self.name, + 'system_message': self.system_message, + 'roles': self.roles, + 'messages': self.messages, + 'offset': self.offset, + } + + +# A global registry for all conversation templates +conv_templates: Dict[str, Conversation] = {} + + +def register_conv_template(template: Conversation, override: bool = False): + """Register a new conversation template.""" + if not override: + assert ( + template.name not in conv_templates + ), f'{template.name} has been registered.' + + conv_templates[template.name] = template + + +def get_conv_template(name: str) -> Conversation: + """Get a conversation template.""" + return conv_templates[name].copy() + + +# Both Hermes-2 and internlm2-chat are chatml-format conversation templates. The difference +# is that during training, the preprocessing function for the Hermes-2 template doesn't add +# <s> at the beginning of the tokenized sequence, while the internlm2-chat template does. +# Therefore, they are completely equivalent during inference. +register_conv_template( + Conversation( + name='Hermes-2', + system_template='<|im_start|>system\n{system_message}', + # note: The new system prompt was not used here to avoid changes in benchmark performance. + # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。', + system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。', + roles=('<|im_start|>user\n', '<|im_start|>assistant\n'), + sep_style=SeparatorStyle.MPT, + sep='<|im_end|>', + stop_token_ids=[ + 2, + 6, + 7, + 8, + ], + stop_str='<|endoftext|>', + ) +) + + +register_conv_template( + Conversation( + name='internlm2-chat', + system_template='<|im_start|>system\n{system_message}', + # note: The new system prompt was not used here to avoid changes in benchmark performance. + # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。', + system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。', + roles=('<|im_start|>user\n', '<|im_start|>assistant\n'), + sep_style=SeparatorStyle.MPT, + sep='<|im_end|>', + stop_token_ids=[ + 2, + 92543, + 92542 + ] + ) +) + + +register_conv_template( + Conversation( + name='phi3-chat', + system_template='<|system|>\n{system_message}', + # note: The new system prompt was not used here to avoid changes in benchmark performance. 
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。', + system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。', + roles=('<|user|>\n', '<|assistant|>\n'), + sep_style=SeparatorStyle.MPT, + sep='<|end|>', + stop_token_ids=[ + 2, + 32000, + 32007 + ] + ) +)
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
sympy__sympy-27708@5224812
sympy/sympy
Python
27,708
Fix ask(Q.finite(x**-1), Q.real(x)) to handle potential division by zero
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. -->

#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. -->
Fixes #27707

#### Brief description of what is fixed or changed
This PR addresses the issue where `ask(Q.finite(x**-1), Q.real(x))` incorrectly returns `True`, ignoring the potential for division by zero when `x` could be zero.

Changes made:
- Modified the `Pow` handler for `FinitePredicate` to check whether the base could be zero while the exponent could be negative.
- Added a condition to return `None` when there is a possibility of division by zero.
- Added tests to verify the correct behavior for expressions like `x**-1` when `x` is real (and could be zero).

#### Release Notes
<!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example:
* solvers
  * Added a new solver for logarithmic equations.
* functions
  * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`.
* physics.units
  * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt.
or if no release note(s) should be included use:
NO ENTRY
See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* assumptions
  * Bug fix: `ask(Q.finite(x**-1), Q.real(x))` now correctly returns `None` instead of `True`.
<!-- END RELEASE NOTES -->
2025-03-06T21:49:55Z
ask(Q.finite(x**-1), Q.real(x)) incorrectly returns True, ignoring potential division by zero

**Description:**

There is an issue with the handling of assumptions for the expression x**-1 when x is real and could potentially be zero. The current implementation doesn't account for the possibility of x being zero, which leads to an incorrect result.

**Current behavior:**

```python
from sympy import ask, Q, Symbol
x = Symbol('x')
print(ask(Q.finite(x**-1), Q.real(x)))  # Output: True
```

**Expected behavior:**
The function should return `None` to indicate uncertainty, as x**-1 is undefined when x = 0.
[ { "body": "**Description:**\n\nThere is an issue with the handling of assumptions for the expression x**-1 when x is real and could potentially be zero. The current implementation doesn't account for the possibility of x being zero, which leads to an incorrect result.\n\n**Current behavior:**\n\n```\nfrom sympy import ask, Q, Symbol\nx = Symbol('x')\nprint(ask(Q.finite(x**-1), Q.real(x))) # Output: True\n```\n**Expected behavior:**\nThe function should return ```None``` to indicate uncertainty, as x**-1 is undefined when x = 0.", "number": 27707, "title": "ask(Q.finite(x**-1), Q.real(x)) incorrectly returns True, ignoring potential division by zero" } ]
715492717df410968e83efdd35a9ed135f2ec2de
{ "head_commit": "5224812d50ae2f7c32dabfdaba9ff971bc085ed4", "head_commit_message": "Added tests", "patch_to_review": "diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py\nindex bb4f387eb582..43b0a62bf341 100644\n--- a/sympy/assumptions/handlers/calculus.py\n+++ b/sympy/assumptions/handlers/calculus.py\n@@ -195,6 +195,8 @@ def _(expr, assumptions):\n if base_bounded is False and ask(Q.extended_nonzero(expr.exp), assumptions):\n return False\n if base_bounded and exp_bounded:\n+ if ask(Q.zero(expr.base),assumptions) is not False and ask(Q.negative(expr.exp),assumptions) is not False:\n+ return None\n return True\n if (abs(expr.base) <= 1) == True and ask(Q.extended_positive(expr.exp), assumptions):\n return True\ndiff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py\nindex 5e0eeb50a037..72ea4ae06fdd 100644\n--- a/sympy/assumptions/tests/test_query.py\n+++ b/sympy/assumptions/tests/test_query.py\n@@ -1082,6 +1082,15 @@ def test_bounded():\n assert ask(Q.finite(2**x), ~Q.finite(x)) is False\n assert ask(Q.finite(x**2), ~Q.finite(x)) is False\n \n+ # https://github.com/sympy/sympy/issues/27707\n+ assert ask(Q.finite(x**y),Q.real(x) & Q.real(y)) is None\n+ assert ask(Q.finite(x**y),Q.real(x) & Q.negative(y)) is None\n+ assert ask(Q.finite(x**y),Q.zero(x) & Q.negative(y)) is None\n+ assert ask(Q.finite(x**y),Q.real(x) & Q.positive(y)) is True\n+ assert ask(Q.finite(x**y),Q.nonzero(x) & Q.real(y)) is True\n+ assert ask(Q.finite(x**y),Q.nonzero(x) & Q.negative(y)) is True\n+ assert ask(Q.finite(x**y),Q.zero(x) & Q.positive(y)) is True\n+\n # sign function\n assert ask(Q.finite(sign(x))) is True\n assert ask(Q.finite(sign(x)), ~Q.finite(x)) is True\n" }
[ { "diff_hunk": "@@ -195,6 +195,8 @@ def _(expr, assumptions):\n if base_bounded is False and ask(Q.extended_nonzero(expr.exp), assumptions):\n return False\n if base_bounded and exp_bounded:\n+ if ask(Q.zero(expr.base),assumptions) is not False and ask(Q.negative(expr.exp),assumptions) is not False:\n+ return None", "line": null, "original_line": 199, "original_start_line": 198, "path": "sympy/assumptions/handlers/calculus.py", "start_line": null, "text": "@user1:\nIf the base is zero and exponent is negative you could return False.\n\n@author:\nOhh right. Just to understand it a bit better we are doing this because finite should return False when the expression is either nan or infinity right ?" } ]
45c9656c09d62a53d5b69c844c6aecc873e72489
diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py
index bb4f387eb582..e2b9c43ccea2 100644
--- a/sympy/assumptions/handlers/calculus.py
+++ b/sympy/assumptions/handlers/calculus.py
@@ -195,6 +195,12 @@ def _(expr, assumptions):
     if base_bounded is False and ask(Q.extended_nonzero(expr.exp), assumptions):
         return False
     if base_bounded and exp_bounded:
+        is_base_zero = ask(Q.zero(expr.base),assumptions)
+        is_exp_negative = ask(Q.negative(expr.exp),assumptions)
+        if is_base_zero is True and is_exp_negative is True:
+            return False
+        if is_base_zero is not False and is_exp_negative is not False:
+            return None
         return True
     if (abs(expr.base) <= 1) == True and ask(Q.extended_positive(expr.exp), assumptions):
         return True
diff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py
index 5e0eeb50a037..2ddac9860940 100644
--- a/sympy/assumptions/tests/test_query.py
+++ b/sympy/assumptions/tests/test_query.py
@@ -1082,6 +1082,15 @@ def test_bounded():
     assert ask(Q.finite(2**x), ~Q.finite(x)) is False
     assert ask(Q.finite(x**2), ~Q.finite(x)) is False
 
+    # https://github.com/sympy/sympy/issues/27707
+    assert ask(Q.finite(x**y), Q.real(x) & Q.real(y)) is None
+    assert ask(Q.finite(x**y), Q.real(x) & Q.negative(y)) is None
+    assert ask(Q.finite(x**y), Q.zero(x) & Q.negative(y)) is False
+    assert ask(Q.finite(x**y), Q.real(x) & Q.positive(y)) is True
+    assert ask(Q.finite(x**y), Q.nonzero(x) & Q.real(y)) is True
+    assert ask(Q.finite(x**y), Q.nonzero(x) & Q.negative(y)) is True
+    assert ask(Q.finite(x**y), Q.zero(x) & Q.positive(y)) is True
+
     # sign function
     assert ask(Q.finite(sign(x))) is True
     assert ask(Q.finite(sign(x)), ~Q.finite(x)) is True
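Read together with the release note and the new tests in the merged patch above, the intended behavior can be sketched as follows; the expected results are taken from the test assertions and the issue report, not from a live run:

```python
from sympy import ask, Q, Symbol

x, y = Symbol('x'), Symbol('y')

# x may be zero, so x**-1 may be undefined: the answer is now None.
print(ask(Q.finite(x**-1), Q.real(x)))                   # None (was True before the fix)

# A base known to be zero with a negative exponent is not finite.
print(ask(Q.finite(x**y), Q.zero(x) & Q.negative(y)))    # False (per the new test)

# Excluding the zero case lets finiteness be decided again.
print(ask(Q.finite(x**y), Q.nonzero(x) & Q.negative(y))) # True (per the new test)
```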
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-27666@a49f2ba
sympy/sympy
Python
27,666
Fix: Improve Q.infinite(Expr) handling for symbolic conditions and complex infinity
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. -->

#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. -->
Fixes #27610

#### Brief description of what is fixed or changed
**Issue:** `ask(Q.infinite(I * oo))` returns `None` instead of `True`.
**Fix:** Added a `Q.infinite(Expr)` handler to deal with symbolic conditions more effectively, and ensured better handling of cases like `x * oo`, `x + I * oo`, and `1 / x` under different assumptions.

#### Other comments
Enhanced `Q.infinite(Expr)` to handle symbolic conditions correctly:
1) Fixes the issue where `ask(Q.infinite(I * oo))` returned `None` instead of `True`.
2) Added extensive test coverage for symbolic expressions.
3) Ensures consistency for operations involving infinity (`oo`, `I * oo`, `zoo`).
4) Handles cases where `Expr` includes symbols with `Q.finite`, `Q.infinite`, or `Q.complex` conditions.

#### Release Notes
<!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example:
* solvers
  * Added a new solver for logarithmic equations.
* functions
  * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`.
* physics.units
  * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt.
or if no release note(s) should be included use:
NO ENTRY
See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* assumptions
  * `ask(Q.infinite(I*oo))` now returns `True` instead of `None`.
<!-- END RELEASE NOTES -->
2025-02-26T23:34:57Z
ask(Q.infinite(I*oo)) gives None

The `Q.infinite` predicate returns `None` for expressions like `1 + I * oo`, which involve complex numbers where the imaginary part is infinite.

This behavior is unexpected because the magnitude of the complex number should be considered infinite due to the presence of `I * oo`.

**Expected Behavior**: The predicate should return `True` when the magnitude of a complex number is infinite, such as `1 + I * oo`.

**Code Snippet**:
```python
from sympy import I, oo, Q, ask
print(ask(Q.infinite(1 + I * oo)))  # Expected: True, but returns None
```
The Q.infinite predicate in SymPy is designed to identify expressions that are explicitly infinite, such as oo, -oo, or zoo (complex infinity). In your example, 1 + I * oo, the expression combines a finite real part (1) with an infinite imaginary part (I * oo). SymPy does not automatically interpret this combination as an infinite magnitude, which is why ask(Q.infinite(1 + I * oo)) returns None. To address this, you can manually check if any component of the complex number is infinite and then determine the magnitude accordingly.

```python
from sympy import I, oo, Q, ask, Abs, re, im, Or

# Define the complex expression
expr = 1 + I * oo

# Check if the real or imaginary part is infinite
is_real_infinite = ask(Q.infinite(re(expr)))
is_imag_infinite = ask(Q.infinite(im(expr)))

# Determine if the magnitude is infinite
is_magnitude_infinite = Or(is_real_infinite, is_imag_infinite)

print(is_magnitude_infinite)  # Outputs: True
```

In this code, re(expr) and im(expr) extract the real and imaginary parts of the expression, respectively. The ask(Q.infinite(...)) function checks if each part is infinite. The Or function then evaluates to True if either component is infinite, indicating that the magnitude of the complex number is indeed infinite.

You are welcome to make a PR which fixes this. Note that oo*I is not the same thing as zoo. See https://en.wikipedia.org/wiki/Directed_infinity.

Also, the current behavior isn't wrong. Returning None is always correct. But it would be nice to be able to handle this case.

It's also worth mentioning the old assumptions are able to handle this case:
```python
(I*oo).is_infinite  # gives True
(I*oo).is_finite  # gives False
```
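For reference, the behavior this pull request aims for mirrors the assertions added in the commit under review below; the expected outputs here are copied from those test assertions rather than observed from a run:

```python
from sympy import I, oo, zoo, Q, ask

print(ask(Q.infinite(I * oo)))        # True (previously None)
print(ask(Q.infinite(1 + I * oo)))    # True
print(ask(Q.infinite(1 + zoo)))       # True
print(ask(Q.infinite(1 / (I * oo))))  # False
```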
[ { "body": "The `Q.infinite` predicate returns `None` for expressions like `1 + I * oo`, \nwhich involve complex numbers where the imaginary part is infinite. \n\nThis behavior is unexpected because the magnitude of the complex number should be \nconsidered infinite due to the presence of `I * oo`.\n\n**Expected Behavior**: The predicate should return `True` when the magnitude of a complex number is infinite, such as `1 + I * oo`.\n\n**Code Snippet**:\n ```python\n from sympy import I, oo, Q, ask\n print(ask(Q.infinite(1 + I * oo))) # Expected: True, but returns None\n ```", "number": 27610, "title": "ask(Q.infinite(I*oo)) gives None" } ]
ee3c38aea07e53d42808dd89f1bf6fe5823ec6b8
{ "head_commit": "a49f2baa63d3f59cfd4b0928a4c06554a22dae62", "head_commit_message": "Removed whitespace", "patch_to_review": "diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py\nindex 263bed6da00c..aac68c7d875d 100644\n--- a/sympy/assumptions/handlers/calculus.py\n+++ b/sympy/assumptions/handlers/calculus.py\n@@ -4,7 +4,7 @@\n \"\"\"\n \n from sympy.assumptions import Q, ask\n-from sympy.core import Add, Mul, Pow, Symbol\n+from sympy.core import Expr,Add, Mul, Pow, Symbol\n from sympy.core.numbers import (NegativeInfinity, GoldenRatio,\n Infinity, Exp1, ComplexInfinity, ImaginaryUnit, NaN, Number, Pi, E,\n TribonacciConstant)\n@@ -232,6 +232,15 @@ def _(expr, assumptions):\n return True\n \n \[email protected](Expr)\n+def _(expr, assumptions):\n+ if assumptions is True:\n+ result = ask(~Q.finite(expr))\n+ else:\n+ result = ask(~Q.finite(expr),assumptions)\n+ return result\n+\n+\n # PositiveInfinitePredicate\n \n \ndiff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py\nindex 4bf385d8575b..ae3832666e15 100644\n--- a/sympy/assumptions/tests/test_query.py\n+++ b/sympy/assumptions/tests/test_query.py\n@@ -1107,6 +1107,21 @@ def test_bounded():\n assert ask(Q.finite(cos(x) + sin(x))) is True\n \n \n+def test_unbounded_expr():\n+ # https://github.com/sympy/sympy/issues/27610\n+ assert ask(Q.infinite(I * oo)) is True\n+ assert ask(Q.infinite(1 + I*oo)) is True\n+ assert ask(Q.infinite(3 * (I * oo))) is True\n+ assert ask(Q.infinite(-I * oo)) is True\n+ assert ask(Q.infinite(1 + zoo)) is True\n+ assert ask(Q.infinite(I * zoo)) is True\n+ assert ask(Q.infinite(x / y), Q.infinite(x) & Q.finite(y) & ~Q.zero(y)) is True\n+ assert ask(Q.infinite(I * oo - I * oo)) is None\n+ assert ask(Q.infinite(x * I * oo)) is None\n+ assert ask(Q.infinite(1 / x), Q.finite(x) & ~Q.zero(x)) is False\n+ assert ask(Q.infinite(1 / (I * oo))) is False\n+\n+\n def test_issue_27441():\n # https://github.com/sympy/sympy/issues/27441\n assert ask(Q.composite(y), Q.integer(y) & Q.positive(y) & ~Q.prime(y)) is None\n" }
[ { "diff_hunk": "@@ -232,6 +232,15 @@ def _(expr, assumptions):\n return True\n \n \[email protected](Expr)\n+def _(expr, assumptions):\n+ if assumptions is True:\n+ result = ask(~Q.finite(expr))\n+ else:\n+ result = ask(~Q.finite(expr),assumptions)\n+ return result", "line": null, "original_line": 241, "original_start_line": 237, "path": "sympy/assumptions/handlers/calculus.py", "start_line": null, "text": "@user1:\nThe following should accomplish exactly the same thing:\r\n```python\r\nreturn ask(~Q.finite(expr),assumptions)\r\n```\r\n\n\n@author:\nYes even I thought so but it does not which is the reason when no assumptions have been passed I have to handle it a bit differently. As without any assumptions when I ask Q.infinite(I*oo) it calls the finite handler with assumptions as True, which actually changes the result from False in this case to None(that is asking if I*oo is finite or not). It you still think though something better can be done, please do let me know thanks. \n\n@user1:\nI'll take a look at it. By the way, in the future try to have shorter branch names. The branch name should be easy to type. \n\n@user1:\nI tried changing it and it seems to work" }, { "diff_hunk": "@@ -1107,6 +1107,21 @@ def test_bounded():\n assert ask(Q.finite(cos(x) + sin(x))) is True\n \n \n+def test_unbounded_expr():\n+ # https://github.com/sympy/sympy/issues/27610", "line": null, "original_line": 1111, "original_start_line": null, "path": "sympy/assumptions/tests/test_query.py", "start_line": null, "text": "@user1:\nA link to the issue isn't needed as this is not a bug fix\n\n@author:\nI have removed the link to the issue here." }, { "diff_hunk": "@@ -1107,6 +1107,21 @@ def test_bounded():\n assert ask(Q.finite(cos(x) + sin(x))) is True\n \n \n+def test_unbounded_expr():", "line": null, "original_line": 1110, "original_start_line": null, "path": "sympy/assumptions/tests/test_query.py", "start_line": null, "text": "@user1:\nI think it would be better to title this \"test_unbounded\"\n\n@author:\nI have changed the name of the test function as told." } ]
a021d8d9a0da6582a518c1bb9bcad79057c6b36f
diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py
index 263bed6da00c..c685b16cfe35 100644
--- a/sympy/assumptions/handlers/calculus.py
+++ b/sympy/assumptions/handlers/calculus.py
@@ -4,7 +4,7 @@
 """

 from sympy.assumptions import Q, ask
-from sympy.core import Add, Mul, Pow, Symbol
+from sympy.core import Expr, Add, Mul, Pow, Symbol
 from sympy.core.numbers import (NegativeInfinity, GoldenRatio,
     Infinity, Exp1, ComplexInfinity, ImaginaryUnit, NaN, Number, Pi, E,
     TribonacciConstant)
@@ -227,9 +227,12 @@ def _(expr, assumptions):
 # InfinitePredicate


[email protected]_many(ComplexInfinity, Infinity, NegativeInfinity)
[email protected](Expr)
 def _(expr, assumptions):
-    return True
+    is_finite = Q.finite(expr)._eval_ask(assumptions)
+    if is_finite is None:
+        return None
+    return not is_finite


 # PositiveInfinitePredicate
diff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py
index 4bf385d8575b..9b43e1762184 100644
--- a/sympy/assumptions/tests/test_query.py
+++ b/sympy/assumptions/tests/test_query.py
@@ -1107,6 +1107,20 @@ def test_bounded():
     assert ask(Q.finite(cos(x) + sin(x))) is True


+def test_unbounded():
+    assert ask(Q.infinite(I * oo)) is True
+    assert ask(Q.infinite(1 + I*oo)) is True
+    assert ask(Q.infinite(3 * (I * oo))) is True
+    assert ask(Q.infinite(-I * oo)) is True
+    assert ask(Q.infinite(1 + zoo)) is True
+    assert ask(Q.infinite(I * zoo)) is True
+    assert ask(Q.infinite(x / y), Q.infinite(x) & Q.finite(y) & ~Q.zero(y)) is True
+    assert ask(Q.infinite(I * oo - I * oo)) is None
+    assert ask(Q.infinite(x * I * oo)) is None
+    assert ask(Q.infinite(1 / x), Q.finite(x) & ~Q.zero(x)) is False
+    assert ask(Q.infinite(1 / (I * oo))) is False
+
+
 def test_issue_27441():
     # https://github.com/sympy/sympy/issues/27441
     assert ask(Q.composite(y), Q.integer(y) & Q.positive(y) & ~Q.prime(y)) is None
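As a quick, hedged sanity check of the merged handler, the snippet below mirrors a subset of the assertions from `test_unbounded` in the patch above; it assumes a SymPy build that includes this change:

```python
from sympy import I, oo, zoo, Q, ask
from sympy.abc import x, y

# Mirrors assertions from test_unbounded in the merged patch.
assert ask(Q.infinite(I * oo)) is True
assert ask(Q.infinite(1 + I * oo)) is True
assert ask(Q.infinite(1 + zoo)) is True
assert ask(Q.infinite(x / y), Q.infinite(x) & Q.finite(y) & ~Q.zero(y)) is True
assert ask(Q.infinite(x * I * oo)) is None   # unknown for an unconstrained symbol x
assert ask(Q.infinite(1 / (I * oo))) is False
print("assertions from the merged tests hold")
```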
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xorbitsai__inference-793@17cc512
xorbitsai/inference
Python
793
FEAT: Simple OAuth2 system
Fixes #698

<img width="1849" alt="image" src="https://github.com/xorbitsai/inference/assets/109656400/69156198-e6d0-4094-8c1a-3b9251d4175c">
<img width="1288" alt="image" src="https://github.com/xorbitsai/inference/assets/109656400/663fd072-fb37-46a4-9131-deb0009d7603">

# Detail

1. Add an `--auth-config` option to the `xinference-local` and `xinference-supervisor` commands for starting Xinference with the auth system.
2. Nothing changes when using Xinference without auth.
3. For web users: a login page is shown when the user opens the web UI; after signing in, the UI is used as before. If the user does not have the right permission, the UI shows an error.
4. For command line users, log in first:

```
xinference login --username <name> --password <pass>
```

Then use the commands as before.

5. For SDK users, also log in first:

```
from xinference.client import Client
client = Client('<endpoint>')
client.login('<name>', '<pass>')
```
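A minimal sketch of the login-then-use flow described in point 5 against an auth-enabled cluster; the endpoint is a placeholder, and the `user1`/`pass1` credentials are the placeholder values used by the test fixture shipped in this patch (`test_client_with_auth.py`), not real defaults:

```python
from xinference.client import Client

client = Client("http://127.0.0.1:9997")  # placeholder endpoint

# Against an auth-enabled cluster, unauthenticated calls fail; the bundled
# test expects a RuntimeError here.
try:
    client.list_models()
except RuntimeError as err:
    print("not logged in:", err)

# After logging in with a sufficiently privileged user, the usual API
# works unchanged.
client.login("user1", "pass1")
print(client.list_models())
```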
2023-12-20T10:42:14Z
Provide some authentication mechanism for UI and API

### Is your feature request related to a problem? Please describe

Currently anyone with the xinference URL can make requests and add/remove models. I need a little more control. It does not have to be necessarily very sophisticated initially, some sort of shared secret would do, although if you guys are willing to consider more full fledged authentication mechanisms that would certainly be welcome.

### Describe the solution you'd like

The simplest viable solution for me here would be a single shared secret, like a "root"/password type of credentials, with "password" being configurable.
Yes, it's in our roadmap, and we plan to implement it within the next two to three releases. Suggestions are welcome.

I'll provide some suggestions, but first I want to repeat that I could use a very simple mechanism initially, like I described above. But if we are talking about a more full fledged mechanism, then here are some thoughts.

1. The API and the UI need to rely on separate mechanisms for authentication. My suggestion here is that the API relies on tokens that are generated by the users from the UI, like many other websites with a REST API do (e.g. GitHub). The API should use the same protocol mechanism as OpenAI itself (i.e. an Authorization HTTP header with a Bearer string) for maximum compatibility with the openai-python SDK.

2. There are several options for the UI here, but what would be most useful to me is SSO integration with either Microsoft Active Directory or LDAP (that way my users could rely on the same credentials that let them access other apps throughout the company), along with some way to assign users to roles like "read-only", "token-creator", "model-deployment" that would help me control who gets to spin models up and down, and who gets to make requests to the models.
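To make the first suggestion concrete, here is a hedged sketch of what the proposed bearer-token scheme would look like from a caller's perspective; the endpoint, model UID, and token are placeholders, and the route is the OpenAI-compatible `/v1/chat/completions` endpoint that Xinference already exposes:

```python
import requests

ENDPOINT = "http://127.0.0.1:9997"         # placeholder Xinference endpoint
API_TOKEN = "<token generated in the UI>"  # placeholder, per the suggestion above

response = requests.post(
    f"{ENDPOINT}/v1/chat/completions",
    # Same header convention as OpenAI: "Authorization: Bearer <token>".
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model": "<model_uid>",  # placeholder model UID
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(response.status_code)
print(response.json())
```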
[ { "body": "### Is your feature request related to a problem? Please describe\r\nCurrently anyone with the xinference URL can make requests and add/remove models. I need a little more control. It does not have to be necessarily very sophisticated initially, some sort of shared secret would do, although if you guys are willing to consider more full fledged authentication mechanisms that would certainly be welcome.\r\n\r\n### Describe the solution you'd like\r\nThe simplest viable solution for me here would be a single shared secret, like a \"root\"/password type of credentials, with \"password\" being configurable\r\n\r\n", "number": 698, "title": "Provide some authentication mechanism for UI and API" } ]
ab575c413b0a28c338b9ca1d9cccb51276ceed9d
{ "head_commit": "17cc512ad1e7df492c2c4611b8256d9fcae6ea3e", "head_commit_message": "Add chinese doc for Oauth2", "patch_to_review": "diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml\nindex b5f969f797..70fa563fed 100644\n--- a/.github/workflows/python.yaml\n+++ b/.github/workflows/python.yaml\n@@ -128,6 +128,8 @@ jobs:\n ${{ env.SELF_HOST_PYTHON }} -m pip install -U modelscope\n ${{ env.SELF_HOST_PYTHON }} -m pip install -U sse_starlette\n ${{ env.SELF_HOST_PYTHON }} -m pip install -U xoscar\n+ ${{ env.SELF_HOST_PYTHON }} -m pip install -U \"python-jose[cryptography]\"\n+ ${{ env.SELF_HOST_PYTHON }} -m pip install -U \"passlib[bcrypt]\"\n ${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \\\n -W ignore::PendingDeprecationWarning \\\n --cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/image/tests/test_stable_diffusion.py\ndiff --git a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po\nnew file mode 100644\nindex 0000000000..9df5ac6d60\n--- /dev/null\n+++ b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po\n@@ -0,0 +1,225 @@\n+# SOME DESCRIPTIVE TITLE.\n+# Copyright (C) 2023, Xorbits Inc.\n+# This file is distributed under the same license as the Xinference package.\n+# FIRST AUTHOR <EMAIL@ADDRESS>, 2024.\n+#\n+#, fuzzy\n+msgid \"\"\n+msgstr \"\"\n+\"Project-Id-Version: Xinference \\n\"\n+\"Report-Msgid-Bugs-To: \\n\"\n+\"POT-Creation-Date: 2024-01-10 11:33+0800\\n\"\n+\"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n+\"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n+\"Language: zh_CN\\n\"\n+\"Language-Team: zh_CN <[email protected]>\\n\"\n+\"Plural-Forms: nplurals=1; plural=0;\\n\"\n+\"MIME-Version: 1.0\\n\"\n+\"Content-Type: text/plain; charset=utf-8\\n\"\n+\"Content-Transfer-Encoding: 8bit\\n\"\n+\"Generated-By: Babel 2.12.1\\n\"\n+\n+#: ../../source/user_guide/auth_system.rst:5\n+msgid \"Simple OAuth2 System (experimental)\"\n+msgstr \"OAuth2 系统(实验性质)\"\n+\n+#: ../../source/user_guide/auth_system.rst:7\n+msgid \"\"\n+\"Xinference builds an In-memory OAuth2 authentication and authorization \"\n+\"system using the account-password mode.\"\n+msgstr \"\"\n+\"Xinference 使用了账号密码的模式构建了一个基于内存的 OAuth2 的身份验证和授权系统。\"\n+\n+#: ../../source/user_guide/auth_system.rst:10\n+msgid \"\"\n+\"If you don't have authentication and authorization requirements, you can \"\n+\"use Xinference as before, without any changes.\"\n+msgstr \"\"\n+\"如果没有身份验证和授权的要求,可以像之前一样使用 Xinference,无需任何改动。\"\n+\n+#: ../../source/user_guide/auth_system.rst:14\n+msgid \"Permissions\"\n+msgstr \"权限\"\n+\n+#: ../../source/user_guide/auth_system.rst:15\n+msgid \"\"\n+\"Currently, Xinference system internally defines some interface \"\n+\"permissions:\"\n+msgstr \"\"\n+\"目前,Xinference 内部定义了以下几个接口权限:\"\n+\n+#: ../../source/user_guide/auth_system.rst:17\n+msgid \"``models:list``: Permission to list models and get models' information.\"\n+msgstr \"``models:list``: 获取模型列表和信息的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:18\n+msgid \"``models:read``: Permission to use models.\"\n+msgstr \"``models:read``: 使用模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:19\n+msgid \"``models:register``: Permission to register custom models.\"\n+msgstr \"``models:register``: 注册模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:20\n+msgid \"``models:unregister``: Permission to unregister custom models.\"\n+msgstr \"``models:unregister``: 取消注册模型的权限。\"\n+\n+#: 
../../source/user_guide/auth_system.rst:21\n+msgid \"``models:start``: Permission to launch models.\"\n+msgstr \"``models:start``: 启动模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:22\n+msgid \"``models:stop``: Permission to stop running models.\"\n+msgstr \"``models:stop``: 停止模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:23\n+msgid \"``admin``: Administrators have permissions for all interfaces.\"\n+msgstr \"``admin``: 管理员拥有所有接口的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:27\n+msgid \"Startup\"\n+msgstr \"开始使用\"\n+\n+#: ../../source/user_guide/auth_system.rst:28\n+msgid \"\"\n+\"All authentication and authorization information needs to be specified \"\n+\"and loaded into memory when Xinference is started. Xinference requires a \"\n+\"JSON-formatted file with the following specific fields:\"\n+msgstr \"\"\n+\"在启动 Xinference 时,需要指定所有的验证和授权信息。当前,Xinference 需要一个\"\n+\" JSON 文件,其中包含以下特定字段:\"\n+\n+#: ../../source/user_guide/auth_system.rst:59\n+msgid \"\"\n+\"``auth_config``: This field is used to configure security-related \"\n+\"information.\"\n+msgstr \"\"\n+\"``auth_config``: 这个字段配置与安全相关的信息。\"\n+\n+#: ../../source/user_guide/auth_system.rst:61\n+msgid \"\"\n+\"``algorithm``: The algorithm used for token generation and parsing. \"\n+\"``HS256`` or ``RS256`` is recommended.\"\n+msgstr \"``algorithm``: 用于令牌生成与解析的算法。推荐使用 `HS256`` 或者 ``RS256`` 。\"\n+\n+#: ../../source/user_guide/auth_system.rst:63\n+msgid \"\"\n+\"``secret_key``: The secret_key used for token generation and parsing. Use\"\n+\" this command to generate: ``openssl rand -hex 32``.\"\n+msgstr \"``secret_key``: 用于令牌生成和解析的密钥。可以使用该命令生成: ``openssl rand -hex 32`` 。\"\n+\n+#: ../../source/user_guide/auth_system.rst:65\n+msgid \"\"\n+\"``token_expire_in_minutes``: Reserved field indicating the expiration \"\n+\"time of the token. The current open-source version of Xinference does not\"\n+\" check the expiration time of tokens.\"\n+msgstr \"\"\n+\"``token_expire_in_minutes``: 保留字段,表示令牌失效时间。目前 Xinference 开源版本\"\n+\"不会检查令牌过期时间。\"\n+\n+#: ../../source/user_guide/auth_system.rst:67\n+msgid \"\"\n+\"``user_config``: This field is used to configure user and permission \"\n+\"information. Each user information is composed of these fields:\"\n+msgstr \"\"\n+\"``user_config``: 这个字段用来配置用户和权限信息。每个用户信息由以下字段组成:\"\n+\n+#: ../../source/user_guide/auth_system.rst:69\n+msgid \"``username``: string field for username.\"\n+msgstr \"``username``: 字符串,表示用户名\"\n+\n+#: ../../source/user_guide/auth_system.rst:71\n+msgid \"``password``: string field for password.\"\n+msgstr \"``password``: 字符串,表示密码\"\n+\n+#: ../../source/user_guide/auth_system.rst:73\n+msgid \"\"\n+\"``permissions``: A list containing strings representing the permissions \"\n+\"that this user has. The permissions are described as above.\"\n+msgstr \"\"\n+\"``permissions``: 字符串列表,表示该用户拥有的权限。权限描述如上权限部分文档所述。\"\n+\n+#: ../../source/user_guide/auth_system.rst:76\n+msgid \"\"\n+\"Once you have configured such a JSON file, use the ``--auth-config`` \"\n+\"option to enable Xinference with the authentication and authorization \"\n+\"system. 
For example, for local startup:\"\n+msgstr \"\"\n+\"配置好这样一个 JSON 文件后,可以使用 ``--auth-config`` 选项启用具有身份验证和\"\n+\"授权系统的 Xinference。例如,本地启动的命令如下所示:\"\n+\n+#: ../../source/user_guide/auth_system.rst:83\n+msgid \"\"\n+\"For distributed startup, just specify this option when starting the \"\n+\"supervisor:\"\n+msgstr \"\"\n+\"在分布式环境下,只需要在启动 supervisor 的是指定这个选项:\"\n+\n+#: ../../source/user_guide/auth_system.rst:91\n+msgid \"Usage\"\n+msgstr \"使用\"\n+\n+#: ../../source/user_guide/auth_system.rst:92\n+msgid \"\"\n+\"For Xinference with the authentication and authorization system enabled, \"\n+\"all usage remains the same, except for the addition of a login step at \"\n+\"the beginning.\"\n+msgstr \"\"\n+\"使用带有权限管理的 Xinference 服务与正常的版本保持一致,只是在开始阶段添加了登录步骤。\"\n+\n+#: ../../source/user_guide/auth_system.rst:94\n+msgid \"Signin for command line users:\"\n+msgstr \"使用命令行登录:\"\n+\n+#: ../../source/user_guide/auth_system.rst:101\n+msgid \"For python SDK users:\"\n+msgstr \"使用 Python SDK 登录:\"\n+\n+#: ../../source/user_guide/auth_system.rst:110\n+msgid \"\"\n+\"For web UI users, when opening the web UI, you will first be directed to \"\n+\"the login page. After logging in, you can use the web UI normally.\"\n+msgstr \"\"\n+\"对于 Web UI 的用户,在打开 Web UI 时,将首先跳转到登录页面。登录后,就可以正常使用\"\n+\"Web UI 的功能。\"\n+\n+#: ../../source/user_guide/auth_system.rst:114\n+msgid \"Http Status Code\"\n+msgstr \"Http 状态码\"\n+\n+#: ../../source/user_guide/auth_system.rst:115\n+msgid \"Add the following two HTTP status codes:\"\n+msgstr \"添加了以下两种 HTTP 状态码:\"\n+\n+#: ../../source/user_guide/auth_system.rst:117\n+msgid \"``401 Unauthorized``: login information or token verifies failed.\"\n+msgstr \"``401 Unauthorized``: 登录信息或者令牌验证失效。\"\n+\n+#: ../../source/user_guide/auth_system.rst:118\n+msgid \"``403 Forbidden``: No enough permissions when accessing interfaces.\"\n+msgstr \"``403 Forbidden``: 没有足够的权限访问接口。\"\n+\n+#: ../../source/user_guide/auth_system.rst:120\n+msgid \"\"\n+\"For the command line, SDK, or web UI users, there will be clear \"\n+\"information prompts when encountering authorization and permissions \"\n+\"issues.\"\n+msgstr \"对于命令行、SDK 或 Web UI 用户,在遇到授权和权限问题时,会有明确的信息提示。\"\n+\n+#: ../../source/user_guide/auth_system.rst:124\n+msgid \"Note\"\n+msgstr \"注意\"\n+\n+#: ../../source/user_guide/auth_system.rst:125\n+msgid \"\"\n+\"This feature is still in an experimental stage. 
Feel free to provide \"\n+\"feedback on usage issues or improvement suggestions through `GitHub \"\n+\"issues <https://github.com/xorbitsai/inference/issues>`_ or `our Slack \"\n+\"<https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-\"\n+\"RbfhbPVpx7prOVdM1CAuxg>`_.\"\n+msgstr \"\"\n+\"该功能处于实验阶段。欢迎通过 `GitHub \"\n+\"issues <https://github.com/xorbitsai/inference/issues>`_ 或者\"\n+\" `Slack <https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-\"\n+\"RbfhbPVpx7prOVdM1CAuxg>`_ 提供反馈和建议。\"\n+\ndiff --git a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po\nindex c4080f0eab..134def615b 100644\n--- a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po\n+++ b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po\n@@ -8,7 +8,7 @@ msgid \"\"\n msgstr \"\"\n \"Project-Id-Version: Xinference \\n\"\n \"Report-Msgid-Bugs-To: \\n\"\n-\"POT-Creation-Date: 2023-12-25 17:11+0800\\n\"\n+\"POT-Creation-Date: 2024-01-10 11:33+0800\\n\"\n \"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n \"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n \"Language: zh_CN\\n\"\n@@ -17,7 +17,7 @@ msgstr \"\"\n \"MIME-Version: 1.0\\n\"\n \"Content-Type: text/plain; charset=utf-8\\n\"\n \"Content-Transfer-Encoding: 8bit\\n\"\n-\"Generated-By: Babel 2.11.0\\n\"\n+\"Generated-By: Babel 2.12.1\\n\"\n \n #: ../../source/user_guide/client_api.rst:5\n msgid \"Client API\"\n@@ -39,8 +39,8 @@ msgid \"\"\n \"can connect to the xinference server through this endpoint using the \"\n \"Client.\"\n msgstr \"\"\n-\"在命令日志里会打印服务地址,上述日志中为 `http://127.0.0.1:9997`。用户可以通过 \"\n-\"Client 连接 Xinference 服务。\"\n+\"在命令日志里会打印服务地址,上述日志中为 `http://127.0.0.1:9997`。用户可以通过 Client 连接 Xinference \"\n+\"服务。\"\n \n #: ../../source/user_guide/client_api.rst:20\n msgid \"\"\n@@ -60,41 +60,89 @@ msgstr \"列出所有内置支持的 LLM 模型:\"\n msgid \"To initialize an LLM and chat:\"\n msgstr \"初始化一个大语言模型并且与之对话:\"\n \n-#: ../../source/user_guide/client_api.rst:63\n+#: ../../source/user_guide/client_api.rst:41\n+#: ../../source/user_guide/client_api.rst:162\n+#: ../../source/user_guide/client_api.rst:233\n+msgid \"Xinference Client\"\n+msgstr \"Xinference Client\"\n+\n+#: ../../source/user_guide/client_api.rst:66\n+#: ../../source/user_guide/client_api.rst:194\n+#: ../../source/user_guide/client_api.rst:257\n+msgid \"OpenAI Client\"\n+msgstr \"OpenAI Client\"\n+\n+#: ../../source/user_guide/client_api.rst:68\n+msgid \"\"\n+\"Openai client request with the same function as before, excluding launch \"\n+\"model. 
More details refer to: https://platform.openai.com/docs/api-\"\n+\"reference/chat?lang=python\"\n+msgstr \"\"\n+\"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。\"\n+\"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/chat?lang=python\"\n+\n+#: ../../source/user_guide/client_api.rst:90\n+msgid \"OpenAI Client Tool Calls\"\n+msgstr \"OpenAI 工具调用\"\n+\n+#: ../../source/user_guide/client_api.rst:135\n+#: ../../source/user_guide/client_api.rst:176\n+#: ../../source/user_guide/client_api.rst:208\n+#: ../../source/user_guide/client_api.rst:248\n+#: ../../source/user_guide/client_api.rst:272\n+#: ../../source/user_guide/client_api.rst:300\n+msgid \"Output:\"\n+msgstr \"输出:\"\n+\n+#: ../../source/user_guide/client_api.rst:144\n msgid \"Embedding\"\n msgstr \"Embedding\"\n \n-#: ../../source/user_guide/client_api.rst:65\n+#: ../../source/user_guide/client_api.rst:146\n msgid \"To list the available built-in embedding models:\"\n msgstr \"列出所有内置支持的 embedding 模型:\"\n \n-#: ../../source/user_guide/client_api.rst:78\n+#: ../../source/user_guide/client_api.rst:159\n msgid \"To launch an embedding model and embed text:\"\n msgstr \"拉起 embedding 模型并使用文本向量化:\"\n \n-#: ../../source/user_guide/client_api.rst:92\n-#: ../../source/user_guide/client_api.rst:138\n-#: ../../source/user_guide/client_api.rst:168\n-msgid \"Output:\"\n-msgstr \"输出:\"\n+#: ../../source/user_guide/client_api.rst:196\n+msgid \"\"\n+\"Openai client request with the same function as before, excluding launch \"\n+\"model. More details refer to: https://platform.openai.com/docs/api-\"\n+\"reference/embeddings?lang=python\"\n+msgstr \"\"\n+\"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。\"\n+\"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/embeddings?lang=python\"\n+\n \n-#: ../../source/user_guide/client_api.rst:110\n+#: ../../source/user_guide/client_api.rst:215\n msgid \"Image\"\n msgstr \"图片\"\n \n-#: ../../source/user_guide/client_api.rst:112\n+#: ../../source/user_guide/client_api.rst:217\n msgid \"To list the available built-in image models:\"\n msgstr \"列出所有内置的文生图模型:\"\n \n-#: ../../source/user_guide/client_api.rst:123\n+#: ../../source/user_guide/client_api.rst:230\n msgid \"To initiate an image model and generate an image using a text prompt:\"\n msgstr \"初始化一个文生图模型并通过提示词生成图片:\"\n \n-#: ../../source/user_guide/client_api.rst:147\n+#: ../../source/user_guide/client_api.rst:259\n+msgid \"\"\n+\"Openai client request with the same function as before, excluding launch \"\n+\"model. More details refer to: https://platform.openai.com/docs/api-\"\n+\"reference/images/create?lang=python\"\n+msgstr \"\"\n+\"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。\"\n+\"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/images/create?lang=python\"\n+\n+\n+#: ../../source/user_guide/client_api.rst:279\n msgid \"Rerank\"\n msgstr \"Rerank\"\n \n-#: ../../source/user_guide/client_api.rst:148\n+#: ../../source/user_guide/client_api.rst:280\n msgid \"To launch a rerank model and compute the similarity scores:\"\n msgstr \"拉起 rerank 模型并计算文本相似度:\"\n \ndiff --git a/doc/source/user_guide/auth_system.rst b/doc/source/user_guide/auth_system.rst\nnew file mode 100644\nindex 0000000000..eeeed7181c\n--- /dev/null\n+++ b/doc/source/user_guide/auth_system.rst\n@@ -0,0 +1,127 @@\n+.. 
_user_guide_auth_system:\n+\n+===================================\n+Simple OAuth2 System (experimental)\n+===================================\n+\n+Xinference builds an In-memory OAuth2 authentication and authorization system using the account-password mode.\n+\n+.. note::\n+ If you don't have authentication and authorization requirements, you can use Xinference as before, without any changes.\n+\n+\n+Permissions\n+===========\n+Currently, Xinference system internally defines some interface permissions:\n+\n+* ``models:list``: Permission to list models and get models' information.\n+* ``models:read``: Permission to use models.\n+* ``models:register``: Permission to register custom models.\n+* ``models:unregister``: Permission to unregister custom models.\n+* ``models:start``: Permission to launch models.\n+* ``models:stop``: Permission to stop running models.\n+* ``admin``: Administrators have permissions for all interfaces.\n+\n+\n+Startup\n+=======\n+All authentication and authorization information needs to be specified and loaded into memory when Xinference is started.\n+Xinference requires a JSON-formatted file with the following specific fields:\n+\n+.. code-block:: json\n+\n+ {\n+ \"auth_config\": {\n+ \"algorithm\": \"HS256\",\n+ \"secret_key\": \"09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7\",\n+ \"token_expire_in_minutes\": 30\n+ },\n+ \"user_config\": [\n+ {\n+ \"username\": \"user1\",\n+ \"password\": \"secret1\",\n+ \"permissions\": [\n+ \"admin\"\n+ ]\n+ },\n+ {\n+ \"username\": \"user2\",\n+ \"password\": \"secret2\",\n+ \"permissions\": [\n+ \"models:list\",\n+ \"models:read\"\n+ ]\n+ }\n+ ]\n+ }\n+\n+\n+* ``auth_config``: This field is used to configure security-related information.\n+\n+ * ``algorithm``: The algorithm used for token generation and parsing. ``HS256`` or ``RS256`` is recommended.\n+\n+ * ``secret_key``: The secret_key used for token generation and parsing. Use this command to generate: ``openssl rand -hex 32``.\n+\n+ * ``token_expire_in_minutes``: Reserved field indicating the expiration time of the token. The current open-source version of Xinference does not check the expiration time of tokens.\n+\n+* ``user_config``: This field is used to configure user and permission information. Each user information is composed of these fields:\n+\n+ * ``username``: string field for username.\n+\n+ * ``password``: string field for password.\n+\n+ * ``permissions``: A list containing strings representing the permissions that this user has. The permissions are described as above.\n+\n+\n+Once you have configured such a JSON file, use the ``--auth-config`` option to enable Xinference with the authentication and authorization system. For example, for local startup:\n+\n+.. code-block:: bash\n+\n+ xinference-local -H 0.0.0.0 --auth-config /path/to/your_json_config_file\n+\n+\n+For distributed startup, just specify this option when starting the supervisor:\n+\n+.. code-block:: bash\n+\n+ xinference-supervisor -H <supervisor_ip> --auth-config /path/to/your_json_config_file\n+\n+\n+Usage\n+=====\n+For Xinference with the authentication and authorization system enabled, all usage remains the same, except for the addition of a login step at the beginning.\n+\n+Signin for command line users:\n+\n+.. code-block:: bash\n+\n+ xinference login -e <endpoint> --username <username> --password <password>\n+\n+\n+For python SDK users:\n+\n+.. 
code-block:: python\n+\n+ from xinference.client import Client\n+ client = Client('<endpoint>')\n+ client.login('<name>', '<pass>')\n+\n+\n+For web UI users, when opening the web UI, you will first be directed to the login page. After logging in, you can use the web UI normally.\n+\n+\n+Http Status Code\n+================\n+Add the following two HTTP status codes:\n+\n+* ``401 Unauthorized``: login information or token verifies failed.\n+* ``403 Forbidden``: No enough permissions when accessing interfaces.\n+\n+For the command line, SDK, or web UI users, there will be clear information prompts when encountering authorization and permissions issues.\n+\n+\n+Note\n+====\n+This feature is still in an experimental stage.\n+Feel free to provide feedback on usage issues or improvement suggestions through `GitHub issues <https://github.com/xorbitsai/inference/issues>`_ or\n+`our Slack <https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-RbfhbPVpx7prOVdM1CAuxg>`_.\ndiff --git a/doc/source/user_guide/index.rst b/doc/source/user_guide/index.rst\nindex bca65cec4f..5cdf96e95e 100644\n--- a/doc/source/user_guide/index.rst\n+++ b/doc/source/user_guide/index.rst\n@@ -11,3 +11,4 @@ User Guide\n backends\n client_api\n spec_decoding\n+ auth_system\ndiff --git a/setup.cfg b/setup.cfg\nindex 893f9c334b..aa037fa46e 100644\n--- a/setup.cfg\n+++ b/setup.cfg\n@@ -41,6 +41,8 @@ install_requires =\n modelscope>=1.10.0\n sse_starlette>=1.6.5 # ensure_bytes API break change: https://github.com/sysid/sse-starlette/issues/65\n openai>1 # For typing\n+ python-jose[cryptography]\n+ passlib[bcrypt]\n \n [options.packages.find]\n exclude =\ndiff --git a/xinference/api/oauth2/__init__.py b/xinference/api/oauth2/__init__.py\nnew file mode 100644\nindex 0000000000..37f6558d95\n--- /dev/null\n+++ b/xinference/api/oauth2/__init__.py\n@@ -0,0 +1,13 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\ndiff --git a/xinference/api/oauth2/common.py b/xinference/api/oauth2/common.py\nnew file mode 100644\nindex 0000000000..3d74b66482\n--- /dev/null\n+++ b/xinference/api/oauth2/common.py\n@@ -0,0 +1,14 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+XINFERENCE_OAUTH2_CONFIG = None\ndiff --git a/xinference/api/oauth2/core.py b/xinference/api/oauth2/core.py\nnew file mode 100644\nindex 0000000000..e1a6724de0\n--- /dev/null\n+++ b/xinference/api/oauth2/core.py\n@@ -0,0 +1,93 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# 
Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+import logging\n+from typing import List, Optional, Union\n+\n+from fastapi import Depends, HTTPException, status\n+from fastapi.security import OAuth2PasswordBearer, SecurityScopes\n+from jose import JWTError, jwt\n+from pydantic import BaseModel, ValidationError\n+from typing_extensions import Annotated\n+\n+from .types import AuthStartupConfig, User\n+\n+logger = logging.getLogger(__name__)\n+\n+\n+oauth2_scheme = OAuth2PasswordBearer(tokenUrl=\"token\")\n+\n+\n+def get_db():\n+ from .common import XINFERENCE_OAUTH2_CONFIG\n+\n+ # In a real enterprise-level environment, this should be the database\n+ yield XINFERENCE_OAUTH2_CONFIG\n+\n+\n+def get_user(db_users: List[User], username: str) -> Optional[User]:\n+ for user in db_users:\n+ if user.username == username:\n+ return user\n+ return None\n+\n+\n+class TokenData(BaseModel):\n+ username: Union[str, None] = None\n+ scopes: List[str] = []\n+\n+\n+def verify_token(\n+ security_scopes: SecurityScopes,\n+ token: Annotated[str, Depends(oauth2_scheme)],\n+ config: Optional[AuthStartupConfig] = Depends(get_db),\n+):\n+ if security_scopes.scopes:\n+ authenticate_value = f'Bearer scope=\"{security_scopes.scope_str}\"'\n+ else:\n+ authenticate_value = \"Bearer\"\n+ credentials_exception = HTTPException(\n+ status_code=status.HTTP_401_UNAUTHORIZED,\n+ detail=\"Could not validate credentials\",\n+ headers={\"WWW-Authenticate\": authenticate_value},\n+ )\n+\n+ try:\n+ assert config is not None\n+ payload = jwt.decode(\n+ token,\n+ config.auth_config.secret_key,\n+ algorithms=[config.auth_config.algorithm],\n+ options={\"verify_exp\": False}, # TODO: supports token expiration\n+ )\n+ username: str = payload.get(\"sub\")\n+ if username is None:\n+ raise credentials_exception\n+ token_scopes = payload.get(\"scopes\", [])\n+ # TODO: check expire\n+ token_data = TokenData(scopes=token_scopes, username=username)\n+ except (JWTError, ValidationError):\n+ raise credentials_exception\n+ user = get_user(config.user_config, username=token_data.username) # type: ignore\n+ if user is None:\n+ raise credentials_exception\n+ if \"admin\" in token_data.scopes:\n+ return user\n+ for scope in security_scopes.scopes:\n+ if scope not in token_data.scopes:\n+ raise HTTPException(\n+ status_code=status.HTTP_403_FORBIDDEN,\n+ detail=\"Not enough permissions\",\n+ headers={\"WWW-Authenticate\": authenticate_value},\n+ )\n+ return user\ndiff --git a/xinference/api/oauth2/types.py b/xinference/api/oauth2/types.py\nnew file mode 100644\nindex 0000000000..b0a86a5314\n--- /dev/null\n+++ b/xinference/api/oauth2/types.py\n@@ -0,0 +1,36 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is 
distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+from typing import List\n+\n+from pydantic import BaseModel\n+\n+\n+class LoginUserForm(BaseModel):\n+ username: str\n+ password: str\n+\n+\n+class User(LoginUserForm):\n+ permissions: List[str]\n+\n+\n+class AuthConfig(BaseModel):\n+ algorithm: str = \"HS256\"\n+ secret_key: str\n+ token_expire_in_minutes: int\n+\n+\n+class AuthStartupConfig(BaseModel):\n+ auth_config: AuthConfig\n+ user_config: List[User]\ndiff --git a/xinference/api/oauth2/utils.py b/xinference/api/oauth2/utils.py\nnew file mode 100644\nindex 0000000000..9980b7722a\n--- /dev/null\n+++ b/xinference/api/oauth2/utils.py\n@@ -0,0 +1,44 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+from datetime import datetime, timedelta\n+from typing import Union\n+\n+from jose import jwt\n+from passlib.context import CryptContext\n+\n+pwd_context = CryptContext(schemes=[\"bcrypt\"], deprecated=\"auto\")\n+\n+\n+def create_access_token(\n+ data: dict,\n+ secret_key: str,\n+ algorithm: str,\n+ expires_delta: Union[timedelta, None] = None,\n+):\n+ to_encode = data.copy()\n+ if expires_delta:\n+ expire = datetime.utcnow() + expires_delta\n+ else:\n+ expire = datetime.utcnow() + timedelta(minutes=15)\n+ to_encode.update({\"exp\": expire})\n+ encoded_jwt = jwt.encode(to_encode, secret_key, algorithm=algorithm)\n+ return encoded_jwt\n+\n+\n+def verify_password(plain_password, hashed_password):\n+ return pwd_context.verify(plain_password, hashed_password)\n+\n+\n+def get_password_hash(password):\n+ return pwd_context.hash(password)\ndiff --git a/xinference/api/restful_api.py b/xinference/api/restful_api.py\nindex dd628a08d9..3bdee59210 100644\n--- a/xinference/api/restful_api.py\n+++ b/xinference/api/restful_api.py\n@@ -21,9 +21,11 @@\n import pprint\n import sys\n import warnings\n+from datetime import timedelta\n from typing import Any, List, Optional, Union\n \n import gradio as gr\n+import pydantic\n import xoscar as xo\n from fastapi import (\n APIRouter,\n@@ -34,9 +36,12 @@\n Query,\n Request,\n Response,\n+ Security,\n UploadFile,\n+ status,\n )\n from fastapi.middleware.cors import CORSMiddleware\n+from fastapi.responses import JSONResponse\n from fastapi.staticfiles import StaticFiles\n from PIL import Image\n from pydantic import BaseModel, Field\n@@ -57,11 +62,14 @@\n CreateCompletion,\n ImageList,\n )\n+from .oauth2.core import get_user, verify_token\n+from .oauth2.types import AuthStartupConfig, LoginUserForm, User\n+from .oauth2.utils import create_access_token, get_password_hash, verify_password\n \n logger = logging.getLogger(__name__)\n \n \n-class JSONResponse(StarletteJSONResponse):\n+class JSONResponse(StarletteJSONResponse): # type: ignore # noqa: F811\n def render(self, content: Any) -> bytes:\n return json_dumps(content)\n \n@@ -125,16 +133,48 @@ class 
BuildGradioInterfaceRequest(BaseModel):\n model_lang: List[str]\n \n \n+def authenticate_user(db_users: List[User], username: str, password: str):\n+ user = get_user(db_users, username)\n+ if not user:\n+ return False\n+ if not verify_password(password, user.password):\n+ return False\n+ return user\n+\n+\n class RESTfulAPI:\n- def __init__(self, supervisor_address: str, host: str, port: int):\n+ def __init__(\n+ self,\n+ supervisor_address: str,\n+ host: str,\n+ port: int,\n+ auth_config_file: Optional[str] = None,\n+ ):\n super().__init__()\n self._supervisor_address = supervisor_address\n self._host = host\n self._port = port\n self._supervisor_ref = None\n+ self._auth_config: AuthStartupConfig = self.init_auth_config(auth_config_file)\n self._router = APIRouter()\n self._app = FastAPI()\n \n+ @staticmethod\n+ def init_auth_config(auth_config_file: Optional[str]):\n+ from .oauth2 import common\n+\n+ if auth_config_file:\n+ config: AuthStartupConfig = pydantic.parse_file_as(\n+ path=auth_config_file, type_=AuthStartupConfig\n+ )\n+ for user in config.user_config:\n+ user.password = get_password_hash(user.password)\n+ common.XINFERENCE_OAUTH2_CONFIG = config # type: ignore\n+ return config\n+\n+ def is_authenticated(self):\n+ return False if self._auth_config is None else True\n+\n @staticmethod\n def handle_request_limit_error(e: Exception):\n if \"Rate limit reached\" in str(e):\n@@ -147,6 +187,33 @@ async def _get_supervisor_ref(self) -> xo.ActorRefType[SupervisorActor]:\n )\n return self._supervisor_ref\n \n+ async def login_for_access_token(self, form_data: LoginUserForm) -> JSONResponse:\n+ user = authenticate_user(\n+ self._auth_config.user_config, form_data.username, form_data.password\n+ )\n+ if not user:\n+ raise HTTPException(\n+ status_code=status.HTTP_401_UNAUTHORIZED,\n+ detail=\"Incorrect username or password\",\n+ headers={\"WWW-Authenticate\": \"Bearer\"},\n+ )\n+ assert user is not None and isinstance(user, User)\n+ access_token_expires = timedelta(\n+ minutes=self._auth_config.auth_config.token_expire_in_minutes\n+ )\n+ access_token = create_access_token(\n+ data={\"sub\": user.username, \"scopes\": user.permissions},\n+ secret_key=self._auth_config.auth_config.secret_key,\n+ algorithm=self._auth_config.auth_config.algorithm,\n+ expires_delta=access_token_expires,\n+ )\n+ return JSONResponse(\n+ content={\"access_token\": access_token, \"token_type\": \"bearer\"}\n+ )\n+\n+ async def is_cluster_authenticated(self) -> JSONResponse:\n+ return JSONResponse(content={\"auth\": self.is_authenticated()})\n+\n def serve(self, logging_conf: Optional[dict] = None):\n self._app.add_middleware(\n CORSMiddleware,\n@@ -155,8 +222,10 @@ def serve(self, logging_conf: Optional[dict] = None):\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n+\n+ # internal interface\n self._router.add_api_route(\"/status\", self.get_status, methods=[\"GET\"])\n- self._router.add_api_route(\"/v1/models\", self.list_models, methods=[\"GET\"])\n+ # conflict with /v1/models/{model_uid} below, so register this first\n self._router.add_api_route(\n \"/v1/models/prompts\", self._get_builtin_prompts, methods=[\"GET\"]\n )\n@@ -166,52 +235,115 @@ def serve(self, logging_conf: Optional[dict] = None):\n self._router.add_api_route(\n \"/v1/cluster/devices\", self._get_devices_count, methods=[\"GET\"]\n )\n+ self._router.add_api_route(\"/v1/address\", self.get_address, methods=[\"GET\"])\n+\n+ # user interface\n+ self._router.add_api_route(\n+ \"/v1/ui/{model_uid}\",\n+ self.build_gradio_interface,\n+ 
methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n+ )\n+ self._router.add_api_route(\n+ \"/token\", self.login_for_access_token, methods=[\"POST\"]\n+ )\n+ self._router.add_api_route(\n+ \"/v1/cluster/auth\", self.is_cluster_authenticated, methods=[\"GET\"]\n+ )\n+ self._router.add_api_route(\n+ \"/v1/models\",\n+ self.list_models,\n+ methods=[\"GET\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:list\"])]\n+ if self.is_authenticated()\n+ else None,\n+ )\n+\n+ self._router.add_api_route(\n+ \"/v1/models/{model_uid}\",\n+ self.describe_model,\n+ methods=[\"GET\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:list\"])]\n+ if self.is_authenticated()\n+ else None,\n+ )\n self._router.add_api_route(\n- \"/v1/models/{model_uid}\", self.describe_model, methods=[\"GET\"]\n+ \"/v1/models\",\n+ self.launch_model,\n+ methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:start\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n- self._router.add_api_route(\"/v1/models\", self.launch_model, methods=[\"POST\"])\n self._router.add_api_route(\n \"/experimental/speculative_llms\",\n self.launch_speculative_llm,\n methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:start\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n- \"/v1/models/{model_uid}\", self.terminate_model, methods=[\"DELETE\"]\n+ \"/v1/models/{model_uid}\",\n+ self.terminate_model,\n+ methods=[\"DELETE\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:stop\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n- self._router.add_api_route(\"/v1/address\", self.get_address, methods=[\"GET\"])\n self._router.add_api_route(\n \"/v1/completions\",\n self.create_completion,\n methods=[\"POST\"],\n response_model=Completion,\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/embeddings\",\n self.create_embedding,\n methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/rerank\",\n self.rerank,\n methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/images/generations\",\n self.create_images,\n methods=[\"POST\"],\n response_model=ImageList,\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/images/variations\",\n self.create_variations,\n methods=[\"POST\"],\n response_model=ImageList,\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/chat/completions\",\n self.create_chat_completion,\n methods=[\"POST\"],\n response_model=ChatCompletion,\n+ dependencies=[Security(verify_token, scopes=[\"models:read\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n \n # for custom models\n@@ -219,25 +351,33 @@ def serve(self, logging_conf: Optional[dict] = None):\n \"/v1/model_registrations/{model_type}\",\n self.register_model,\n methods=[\"POST\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:register\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n 
\"/v1/model_registrations/{model_type}/{model_name}\",\n self.unregister_model,\n methods=[\"DELETE\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:unregister\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/model_registrations/{model_type}\",\n self.list_model_registrations,\n methods=[\"GET\"],\n+ dependencies=[Security(verify_token, scopes=[\"models:list\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n self._router.add_api_route(\n \"/v1/model_registrations/{model_type}/{model_name}\",\n self.get_model_registrations,\n methods=[\"GET\"],\n- )\n-\n- self._router.add_api_route(\n- \"/v1/ui/{model_uid}\", self.build_gradio_interface, methods=[\"POST\"]\n+ dependencies=[Security(verify_token, scopes=[\"models:list\"])]\n+ if self.is_authenticated()\n+ else None,\n )\n \n self._app.include_router(self._router)\n@@ -467,7 +607,7 @@ async def launch_model(self, request: Request) -> JSONResponse:\n return JSONResponse(content={\"model_uid\": model_uid})\n \n async def build_gradio_interface(\n- self, model_uid: str, body: BuildGradioInterfaceRequest\n+ self, model_uid: str, body: BuildGradioInterfaceRequest, request: Request\n ) -> JSONResponse:\n \"\"\"\n Separate build_interface with launch_model\n@@ -492,6 +632,7 @@ async def build_gradio_interface(\n from ..core.chat_interface import LLMInterface\n \n try:\n+ access_token = request.headers.get(\"Authorization\")\n internal_host = \"localhost\" if self._host == \"0.0.0.0\" else self._host\n interface = LLMInterface(\n endpoint=f\"http://{internal_host}:{self._port}\",\n@@ -504,6 +645,7 @@ async def build_gradio_interface(\n model_ability=body.model_ability,\n model_description=body.model_description,\n model_lang=body.model_lang,\n+ access_token=access_token,\n ).build()\n gr.mount_gradio_app(self._app, interface, f\"/{model_uid}\")\n except ValueError as ve:\n@@ -921,11 +1063,20 @@ async def get_model_registrations(\n \n \n def run(\n- supervisor_address: str, host: str, port: int, logging_conf: Optional[dict] = None\n+ supervisor_address: str,\n+ host: str,\n+ port: int,\n+ logging_conf: Optional[dict] = None,\n+ auth_config_file: Optional[str] = None,\n ):\n logger.info(f\"Starting Xinference at endpoint: http://{host}:{port}\")\n try:\n- api = RESTfulAPI(supervisor_address=supervisor_address, host=host, port=port)\n+ api = RESTfulAPI(\n+ supervisor_address=supervisor_address,\n+ host=host,\n+ port=port,\n+ auth_config_file=auth_config_file,\n+ )\n api.serve(logging_conf=logging_conf)\n except SystemExit:\n logger.warning(\"Failed to create socket with port %d\", port)\n@@ -936,7 +1087,10 @@ def run(\n logger.info(f\"Found available port: {port}\")\n logger.info(f\"Starting Xinference at endpoint: http://{host}:{port}\")\n api = RESTfulAPI(\n- supervisor_address=supervisor_address, host=host, port=port\n+ supervisor_address=supervisor_address,\n+ host=host,\n+ port=port,\n+ auth_config_file=auth_config_file,\n )\n api.serve(logging_conf=logging_conf)\n else:\n@@ -944,10 +1098,15 @@ def run(\n \n \n def run_in_subprocess(\n- supervisor_address: str, host: str, port: int, logging_conf: Optional[dict] = None\n+ supervisor_address: str,\n+ host: str,\n+ port: int,\n+ logging_conf: Optional[dict] = None,\n+ auth_config_file: Optional[str] = None,\n ) -> multiprocessing.Process:\n p = multiprocessing.Process(\n- target=run, args=(supervisor_address, host, port, logging_conf)\n+ target=run,\n+ args=(supervisor_address, host, port, logging_conf, auth_config_file),\n )\n p.daemon = 
True\n p.start()\ndiff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py\nindex 6a8c918c50..c081ede84c 100644\n--- a/xinference/client/restful/restful_client.py\n+++ b/xinference/client/restful/restful_client.py\n@@ -53,9 +53,10 @@ class RESTfulModelHandle:\n programmatically.\n \"\"\"\n \n- def __init__(self, model_uid: str, base_url: str):\n+ def __init__(self, model_uid: str, base_url: str, auth_headers: Dict):\n self._model_uid = model_uid\n self._base_url = base_url\n+ self.auth_headers = auth_headers\n \n \n class RESTfulEmbeddingModelHandle(RESTfulModelHandle):\n@@ -82,7 +83,7 @@ def create_embedding(self, input: Union[str, List[str]]) -> \"Embedding\":\n \"\"\"\n url = f\"{self._base_url}/v1/embeddings\"\n request_body = {\"model\": self._model_uid, \"input\": input}\n- response = requests.post(url, json=request_body)\n+ response = requests.post(url, json=request_body, headers=self.auth_headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to create the embeddings, detail: {_get_error_string(response)}\"\n@@ -135,7 +136,7 @@ def rerank(\n \"max_chunks_per_doc\": max_chunks_per_doc,\n \"return_documents\": return_documents,\n }\n- response = requests.post(url, json=request_body)\n+ response = requests.post(url, json=request_body, headers=self.auth_headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to rerank documents, detail: {response.json()['detail']}\"\n@@ -182,7 +183,7 @@ def text_to_image(\n \"response_format\": response_format,\n \"kwargs\": json.dumps(kwargs),\n }\n- response = requests.post(url, json=request_body)\n+ response = requests.post(url, json=request_body, headers=self.auth_headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to create the images, detail: {_get_error_string(response)}\"\n@@ -246,10 +247,7 @@ def image_to_image(\n for key, value in params.items():\n files.append((key, (None, value)))\n files.append((\"image\", (\"image\", image, \"application/octet-stream\")))\n- response = requests.post(\n- url,\n- files=files,\n- )\n+ response = requests.post(url, files=files, headers=self.auth_headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to variants the images, detail: {_get_error_string(response)}\"\n@@ -302,7 +300,9 @@ def generate(\n \n stream = bool(generate_config and generate_config.get(\"stream\"))\n \n- response = requests.post(url, json=request_body, stream=stream)\n+ response = requests.post(\n+ url, json=request_body, stream=stream, headers=self.auth_headers\n+ )\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to generate completion, detail: {_get_error_string(response)}\"\n@@ -384,7 +384,9 @@ def chat(\n request_body[key] = value\n \n stream = bool(generate_config and generate_config.get(\"stream\"))\n- response = requests.post(url, json=request_body, stream=stream)\n+ response = requests.post(\n+ url, json=request_body, stream=stream, headers=self.auth_headers\n+ )\n \n if response.status_code != 200:\n raise RuntimeError(\n@@ -468,7 +470,9 @@ def chat(\n request_body[key] = value\n \n stream = bool(generate_config and generate_config.get(\"stream\"))\n- response = requests.post(url, json=request_body, stream=stream)\n+ response = requests.post(\n+ url, json=request_body, stream=stream, headers=self.auth_headers\n+ )\n \n if response.status_code != 200:\n raise RuntimeError(\n@@ -536,7 +540,9 @@ def chat(\n request_body[key] = value\n \n stream = bool(generate_config and 
generate_config.get(\"stream\"))\n- response = requests.post(url, json=request_body, stream=stream)\n+ response = requests.post(\n+ url, json=request_body, stream=stream, headers=self.auth_headers\n+ )\n \n if response.status_code != 200:\n raise RuntimeError(\n@@ -589,7 +595,9 @@ def generate(\n \n stream = bool(generate_config and generate_config.get(\"stream\"))\n \n- response = requests.post(url, json=request_body, stream=stream)\n+ response = requests.post(\n+ url, json=request_body, stream=stream, headers=self.auth_headers\n+ )\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to generate completion, detail: {response.json()['detail']}\"\n@@ -605,6 +613,47 @@ def generate(\n class Client:\n def __init__(self, base_url):\n self.base_url = base_url\n+ self._headers = {}\n+ self._cluster_authed = False\n+ self._check_cluster_authenticated()\n+\n+ def _set_token(self, token: Optional[str]):\n+ if not self._cluster_authed or token is None:\n+ return\n+ self._headers[\"Authorization\"] = f\"Bearer {token}\"\n+\n+ def _get_token(self) -> Optional[str]:\n+ return (\n+ str(self._headers[\"Authorization\"]).replace(\"Bearer \", \"\")\n+ if \"Authorization\" in self._headers\n+ else None\n+ )\n+\n+ def _check_cluster_authenticated(self):\n+ url = f\"{self.base_url}/v1/cluster/auth\"\n+ response = requests.get(url)\n+ if response.status_code != 200:\n+ raise RuntimeError(\n+ f\"Failed to get cluster information, detail: {response.json()['detail']}\"\n+ )\n+ response_data = response.json()\n+ self._cluster_authed = bool(response_data[\"auth\"])\n+\n+ def login(self, username: str, password: str):\n+ if not self._cluster_authed:\n+ return\n+ url = f\"{self.base_url}/token\"\n+\n+ payload = {\"username\": username, \"password\": password}\n+\n+ response = requests.post(url, json=payload)\n+ if response.status_code != 200:\n+ raise RuntimeError(f\"Failed to login, detail: {response.json()['detail']}\")\n+\n+ response_data = response.json()\n+ # Only bearer token for now\n+ access_token = response_data[\"access_token\"]\n+ self._headers[\"Authorization\"] = f\"Bearer {access_token}\"\n \n def list_models(self) -> Dict[str, Dict[str, Any]]:\n \"\"\"\n@@ -619,7 +668,7 @@ def list_models(self) -> Dict[str, Dict[str, Any]]:\n \n url = f\"{self.base_url}/v1/models\"\n \n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to list model, detail: {_get_error_string(response)}\"\n@@ -664,7 +713,7 @@ def launch_speculative_llm(\n }\n \n url = f\"{self.base_url}/experimental/speculative_llms\"\n- response = requests.post(url, json=payload)\n+ response = requests.post(url, json=payload, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to launch model, detail: {_get_error_string(response)}\"\n@@ -739,7 +788,7 @@ def launch_model(\n for key, value in kwargs.items():\n payload[str(key)] = value\n \n- response = requests.post(url, json=payload)\n+ response = requests.post(url, json=payload, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to launch model, detail: {_get_error_string(response)}\"\n@@ -766,7 +815,7 @@ def terminate_model(self, model_uid: str):\n \n url = f\"{self.base_url}/v1/models/{model_uid}\"\n \n- response = requests.delete(url)\n+ response = requests.delete(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to terminate model, detail: 
{_get_error_string(response)}\"\n@@ -774,7 +823,7 @@ def terminate_model(self, model_uid: str):\n \n def _get_supervisor_internal_address(self):\n url = f\"{self.base_url}/v1/address\"\n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(f\"Failed to get supervisor internal address\")\n response_data = response.json()\n@@ -806,7 +855,7 @@ def get_model(self, model_uid: str) -> RESTfulModelHandle:\n \"\"\"\n \n url = f\"{self.base_url}/v1/models/{model_uid}\"\n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to get the model description, detail: {_get_error_string(response)}\"\n@@ -815,21 +864,35 @@ def get_model(self, model_uid: str) -> RESTfulModelHandle:\n \n if desc[\"model_type\"] == \"LLM\":\n if desc[\"model_format\"] == \"ggmlv3\" and \"chatglm\" in desc[\"model_name\"]:\n- return RESTfulChatglmCppGenerateModelHandle(model_uid, self.base_url)\n+ return RESTfulChatglmCppGenerateModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n elif \"chat\" in desc[\"model_ability\"]:\n- return RESTfulChatModelHandle(model_uid, self.base_url)\n+ return RESTfulChatModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n elif \"generate\" in desc[\"model_ability\"]:\n- return RESTfulGenerateModelHandle(model_uid, self.base_url)\n+ return RESTfulGenerateModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n else:\n raise ValueError(f\"Unrecognized model ability: {desc['model_ability']}\")\n elif desc[\"model_type\"] == \"embedding\":\n- return RESTfulEmbeddingModelHandle(model_uid, self.base_url)\n+ return RESTfulEmbeddingModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n elif desc[\"model_type\"] == \"image\":\n- return RESTfulImageModelHandle(model_uid, self.base_url)\n+ return RESTfulImageModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n elif desc[\"model_type\"] == \"rerank\":\n- return RESTfulRerankModelHandle(model_uid, self.base_url)\n+ return RESTfulRerankModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n elif desc[\"model_type\"] == \"multimodal\":\n- return RESTfulMultimodalModelHandle(model_uid, self.base_url)\n+ return RESTfulMultimodalModelHandle(\n+ model_uid, self.base_url, auth_headers=self._headers\n+ )\n else:\n raise ValueError(f\"Unknown model type:{desc['model_type']}\")\n \n@@ -876,7 +939,7 @@ def describe_model(self, model_uid: str):\n \"\"\"\n \n url = f\"{self.base_url}/v1/models/{model_uid}\"\n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to get the model description, detail: {_get_error_string(response)}\"\n@@ -903,7 +966,7 @@ def register_model(self, model_type: str, model: str, persist: bool):\n \"\"\"\n url = f\"{self.base_url}/v1/model_registrations/{model_type}\"\n request_body = {\"model\": model, \"persist\": persist}\n- response = requests.post(url, json=request_body)\n+ response = requests.post(url, json=request_body, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to register model, detail: {_get_error_string(response)}\"\n@@ -929,7 +992,7 @@ def unregister_model(self, model_type: str, model_name: str):\n Report failure to unregister the custom model. 
Provide details of failure through error message.\n \"\"\"\n url = f\"{self.base_url}/v1/model_registrations/{model_type}/{model_name}\"\n- response = requests.delete(url)\n+ response = requests.delete(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to register model, detail: {_get_error_string(response)}\"\n@@ -959,7 +1022,7 @@ def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]:\n \n \"\"\"\n url = f\"{self.base_url}/v1/model_registrations/{model_type}\"\n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to list model registration, detail: {_get_error_string(response)}\"\n@@ -987,7 +1050,7 @@ def get_model_registration(\n The collection of registered models on the server.\n \"\"\"\n url = f\"{self.base_url}/v1/model_registrations/{model_type}/{model_name}\"\n- response = requests.get(url)\n+ response = requests.get(url, headers=self._headers)\n if response.status_code != 200:\n raise RuntimeError(\n f\"Failed to list model registration, detail: {_get_error_string(response)}\"\ndiff --git a/xinference/client/tests/test_client_with_auth.py b/xinference/client/tests/test_client_with_auth.py\nnew file mode 100644\nindex 0000000000..5be0df8fbf\n--- /dev/null\n+++ b/xinference/client/tests/test_client_with_auth.py\n@@ -0,0 +1,51 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import pytest\n+\n+from ..restful.restful_client import Client as RESTfulClient\n+from ..restful.restful_client import RESTfulEmbeddingModelHandle\n+\n+\n+def test_client_auth(setup_with_auth):\n+ endpoint, _ = setup_with_auth\n+ client = RESTfulClient(endpoint)\n+ with pytest.raises(RuntimeError):\n+ client.list_models()\n+\n+ client.login(\"user2\", \"pass2\")\n+ assert len(client.list_models()) == 0\n+\n+ with pytest.raises(RuntimeError):\n+ client.launch_model(\n+ model_name=\"jina-embeddings-v2-small-en\", model_type=\"embedding\"\n+ )\n+\n+ client.login(\"user3\", \"pass3\")\n+ model_uid = client.launch_model(\n+ model_name=\"jina-embeddings-v2-small-en\", model_type=\"embedding\"\n+ )\n+ model = client.get_model(model_uid=model_uid)\n+ assert isinstance(model, RESTfulEmbeddingModelHandle)\n+\n+ completion = model.create_embedding(\"write a poem.\")\n+ assert len(completion[\"data\"][0][\"embedding\"]) == 512\n+\n+ with pytest.raises(RuntimeError):\n+ client.terminate_model(model_uid=model_uid)\n+\n+ client.login(\"user1\", \"pass1\")\n+ assert len(client.list_models()) == 1\n+ client.terminate_model(model_uid=model_uid)\n+ assert len(client.list_models()) == 0\ndiff --git a/xinference/conftest.py b/xinference/conftest.py\nindex 4d41c326c4..5c0394d688 100644\n--- a/xinference/conftest.py\n+++ b/xinference/conftest.py\n@@ -13,16 +13,19 @@\n # limitations under the License.\n \n import asyncio\n+import json\n import logging\n import multiprocessing\n import os\n import signal\n import 
sys\n+import tempfile\n from typing import Dict, Optional\n \n import pytest\n import xoscar as xo\n \n+from .api.oauth2.types import AuthConfig, AuthStartupConfig, User\n from .constants import XINFERENCE_LOG_BACKUP_COUNT, XINFERENCE_LOG_MAX_BYTES\n from .core.supervisor import SupervisorActor\n from .deploy.utils import create_worker_actor_pool, get_log_file, get_timestamp_ms\n@@ -233,3 +236,58 @@ def setup_with_file_logging():\n \n local_cluster_proc.terminate()\n restful_api_proc.terminate()\n+\n+\[email protected]\n+def setup_with_auth():\n+ from .api.restful_api import run_in_subprocess as run_restful_api\n+ from .deploy.utils import health_check as cluster_health_check\n+\n+ logging.config.dictConfig(TEST_LOGGING_CONF) # type: ignore\n+\n+ supervisor_addr = f\"localhost:{xo.utils.get_next_port()}\"\n+ local_cluster_proc = run_test_cluster_in_subprocess(\n+ supervisor_addr, TEST_LOGGING_CONF\n+ )\n+ if not cluster_health_check(supervisor_addr, max_attempts=10, sleep_interval=3):\n+ raise RuntimeError(\"Cluster is not available after multiple attempts\")\n+\n+ user1 = User(username=\"user1\", password=\"pass1\", permissions=[\"admin\"])\n+ user2 = User(username=\"user2\", password=\"pass2\", permissions=[\"models:list\"])\n+ user3 = User(\n+ username=\"user3\",\n+ password=\"pass3\",\n+ permissions=[\"models:list\", \"models:read\", \"models:start\"],\n+ )\n+ auth_config = AuthConfig(\n+ algorithm=\"HS256\",\n+ secret_key=\"09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7\",\n+ token_expire_in_minutes=30,\n+ )\n+ startup_config = AuthStartupConfig(\n+ auth_config=auth_config, user_config=[user1, user2, user3]\n+ )\n+ _, auth_file = tempfile.mkstemp()\n+ with open(auth_file, \"w\") as fd:\n+ fd.write(json.dumps(startup_config.dict()))\n+\n+ port = xo.utils.get_next_port()\n+ restful_api_proc = run_restful_api(\n+ supervisor_addr,\n+ host=\"localhost\",\n+ port=port,\n+ logging_conf=TEST_LOGGING_CONF,\n+ auth_config_file=auth_file,\n+ )\n+ endpoint = f\"http://localhost:{port}\"\n+ if not api_health_check(endpoint, max_attempts=10, sleep_interval=5):\n+ raise RuntimeError(\"Endpoint is not available after multiple attempts\")\n+\n+ yield f\"http://localhost:{port}\", supervisor_addr\n+\n+ local_cluster_proc.terminate()\n+ restful_api_proc.terminate()\n+ try:\n+ os.remove(auth_file)\n+ except:\n+ pass\ndiff --git a/xinference/constants.py b/xinference/constants.py\nindex 5fee675083..294ea71e02 100644\n--- a/xinference/constants.py\n+++ b/xinference/constants.py\n@@ -39,6 +39,7 @@ def get_xinference_home() -> str:\n XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, \"model\")\n XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, \"logs\")\n XINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, \"image\")\n+XINFERENCE_AUTH_DIR = os.path.join(XINFERENCE_HOME, \"auth\")\n \n XINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\ndiff --git a/xinference/core/chat_interface.py b/xinference/core/chat_interface.py\nindex aa5b284a72..3adbbd36b2 100644\n--- a/xinference/core/chat_interface.py\n+++ b/xinference/core/chat_interface.py\n@@ -14,7 +14,7 @@\n \n import logging\n import os\n-from typing import Generator, List\n+from typing import Generator, List, Optional\n \n import gradio as gr\n from gradio.components import Markdown, Textbox\n@@ -43,6 +43,7 @@ def __init__(\n model_ability: List[str],\n model_description: str,\n model_lang: List[str],\n+ access_token: Optional[str],\n ):\n self.endpoint = endpoint\n self.model_uid = 
model_uid\n@@ -54,6 +55,9 @@ def __init__(\n self.model_ability = model_ability\n self.model_description = model_description\n self.model_lang = model_lang\n+ self._access_token = (\n+ access_token.replace(\"Bearer \", \"\") if access_token is not None else None\n+ )\n \n def build(self) -> \"gr.Blocks\":\n if \"chat\" in self.model_ability:\n@@ -102,6 +106,7 @@ def generate_wrapper(\n from ..client import RESTfulClient\n \n client = RESTfulClient(self.endpoint)\n+ client._set_token(self._access_token)\n model = client.get_model(self.model_uid)\n assert isinstance(\n model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle)\n@@ -198,6 +203,7 @@ def complete(text, hist, max_tokens, temperature) -> Generator:\n from ..client import RESTfulClient\n \n client = RESTfulClient(self.endpoint)\n+ client._set_token(self._access_token)\n model = client.get_model(self.model_uid)\n assert isinstance(model, RESTfulGenerateModelHandle)\n \n@@ -234,6 +240,7 @@ def retry(text, hist, max_tokens, temperature) -> Generator:\n from ..client import RESTfulClient\n \n client = RESTfulClient(self.endpoint)\n+ client._set_token(self._access_token)\n model = client.get_model(self.model_uid)\n assert isinstance(model, RESTfulGenerateModelHandle)\n \ndiff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py\nindex 0910ae3c9b..28099bdf1a 100644\n--- a/xinference/deploy/cmdline.py\n+++ b/xinference/deploy/cmdline.py\n@@ -24,13 +24,13 @@\n \n from .. import __version__\n from ..client import RESTfulClient\n-from ..client.oscar.actor_client import ActorClient\n from ..client.restful.restful_client import (\n RESTfulChatglmCppChatModelHandle,\n RESTfulChatModelHandle,\n RESTfulGenerateModelHandle,\n )\n from ..constants import (\n+ XINFERENCE_AUTH_DIR,\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST,\n XINFERENCE_DEFAULT_ENDPOINT_PORT,\n XINFERENCE_DEFAULT_LOCAL_HOST,\n@@ -62,10 +62,32 @@ def get_endpoint(endpoint: Optional[str]) -> str:\n return endpoint\n \n \n+def get_hash_endpoint(endpoint: str) -> str:\n+ import hashlib\n+\n+ m = hashlib.sha256()\n+ m.update(bytes(endpoint, \"utf-8\"))\n+ return m.hexdigest()\n+\n+\n+def get_stored_token(\n+ endpoint: str, client: Optional[RESTfulClient] = None\n+) -> Optional[str]:\n+ rest_client = RESTfulClient(endpoint) if client is None else client\n+ authed = rest_client._cluster_authed\n+ if not authed:\n+ return None\n+\n+ token_path = os.path.join(XINFERENCE_AUTH_DIR, get_hash_endpoint(endpoint))\n+ if not os.path.exists(token_path):\n+ raise RuntimeError(\"Cannot find access token, please login first!\")\n+ with open(token_path, \"r\") as f:\n+ access_token = str(f.read())\n+ return access_token\n+\n+\n def start_local_cluster(\n- log_level: str,\n- host: str,\n- port: int,\n+ log_level: str, host: str, port: int, auth_config_file: Optional[str] = None\n ):\n from .local import main\n \n@@ -81,6 +103,7 @@ def start_local_cluster(\n host=host,\n port=port,\n logging_conf=dict_config,\n+ auth_config_file=auth_config_file,\n )\n \n \n@@ -159,12 +182,15 @@ def cli(\n type=int,\n help=\"Specify the port number for the Xinference server.\",\n )\n-def local(\n- log_level: str,\n- host: str,\n- port: int,\n-):\n- start_local_cluster(log_level=log_level, host=host, port=port)\[email protected](\n+ \"--auth-config\",\n+ type=str,\n+ help=\"Specify the auth config json file.\",\n+)\n+def local(log_level: str, host: str, port: int, auth_config: Optional[str]):\n+ start_local_cluster(\n+ log_level=log_level, host=host, port=port, auth_config_file=auth_config\n+ )\n \n \n 
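For context on the `--auth-config` option added in the hunk above: the file it points to is a JSON-serialized AuthStartupConfig, the same structure the `setup_with_auth` fixture in conftest.py writes to a temp file. A minimal sketch of producing such a file, assuming the models are importable as `xinference.api.oauth2.types` (conftest.py uses the relative form `.api.oauth2.types`) and mirroring only the fields that fixture sets; real user names, passwords and the secret key are placeholders:

    import json
    from xinference.api.oauth2.types import AuthConfig, AuthStartupConfig, User

    # One admin account for illustration; the schema may accept more fields
    # (e.g. per-user api_keys) than shown here.
    admin = User(username="admin", password="admin_pass", permissions=["admin"])
    startup_config = AuthStartupConfig(
        auth_config=AuthConfig(
            algorithm="HS256",
            secret_key="<random-hex-secret>",  # placeholder, generate your own
            token_expire_in_minutes=30,
        ),
        user_config=[admin],
    )
    # Write the config in the same way the test fixture does.
    with open("auth_config.json", "w") as fd:
        fd.write(json.dumps(startup_config.dict()))

The resulting file is what the `--auth-config` flag on the `local` command above (and the matching flag added to the `supervisor` command below) expects.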
@click.command(\n@@ -196,7 +222,18 @@ def local(\n type=int,\n help=\"Specify the port number for the Xinference supervisor.\",\n )\n-def supervisor(log_level: str, host: str, port: int, supervisor_port: Optional[int]):\[email protected](\n+ \"--auth-config\",\n+ type=str,\n+ help=\"Specify the auth config json file.\",\n+)\n+def supervisor(\n+ log_level: str,\n+ host: str,\n+ port: int,\n+ supervisor_port: Optional[int],\n+ auth_config: Optional[str],\n+):\n from ..deploy.supervisor import main\n \n dict_config = get_config_dict(\n@@ -208,7 +245,11 @@ def supervisor(log_level: str, host: str, port: int, supervisor_port: Optional[i\n logging.config.dictConfig(dict_config) # type: ignore\n \n main(\n- host=host, port=port, supervisor_port=supervisor_port, logging_conf=dict_config\n+ host=host,\n+ port=port,\n+ supervisor_port=supervisor_port,\n+ logging_conf=dict_config,\n+ auth_config_file=auth_config,\n )\n \n \n@@ -288,6 +329,7 @@ def register_model(\n model = fd.read()\n \n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n client.register_model(\n model_type=model_type,\n model=model,\n@@ -316,6 +358,7 @@ def unregister_model(\n endpoint = get_endpoint(endpoint)\n \n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n client.unregister_model(\n model_type=model_type,\n model_name=model_name,\n@@ -343,8 +386,9 @@ def list_model_registrations(\n from tabulate import tabulate\n \n endpoint = get_endpoint(endpoint)\n-\n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n+\n registrations = client.list_model_registrations(model_type=model_type)\n \n table = []\n@@ -518,8 +562,9 @@ def model_launch(\n if size_in_billions is None or \"_\" in size_in_billions\n else int(size_in_billions)\n )\n-\n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n+\n model_uid = client.launch_model(\n model_name=model_name,\n model_type=model_type,\n@@ -550,6 +595,7 @@ def model_list(endpoint: Optional[str]):\n \n endpoint = get_endpoint(endpoint)\n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n \n llm_table = []\n embedding_table = []\n@@ -626,8 +672,8 @@ def model_terminate(\n model_uid: str,\n ):\n endpoint = get_endpoint(endpoint)\n-\n client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n client.terminate_model(model_uid=model_uid)\n \n \n@@ -657,6 +703,8 @@ def model_generate(\n stream: bool,\n ):\n endpoint = get_endpoint(endpoint)\n+ client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n if stream:\n # TODO: when stream=True, RestfulClient cannot generate words one by one.\n # So use Client in temporary. 
The implementation needs to be changed to\n@@ -669,7 +717,7 @@ async def generate_internal():\n if prompt == \"\":\n break\n print(f\"Completion: {prompt}\", end=\"\", file=sys.stdout)\n- async for chunk in model.generate(\n+ for chunk in model.generate(\n prompt=prompt,\n generate_config={\"stream\": stream, \"max_tokens\": max_tokens},\n ):\n@@ -680,7 +728,6 @@ async def generate_internal():\n print(choice[\"text\"], end=\"\", flush=True, file=sys.stdout)\n print(\"\", file=sys.stdout)\n \n- client = ActorClient(endpoint=endpoint)\n model = client.get_model(model_uid=model_uid)\n \n loop = asyncio.get_event_loop()\n@@ -700,8 +747,7 @@ async def generate_internal():\n # avoid displaying exception-unhandled warnings\n task.exception()\n else:\n- restful_client = RESTfulClient(base_url=endpoint)\n- restful_model = restful_client.get_model(model_uid=model_uid)\n+ restful_model = client.get_model(model_uid=model_uid)\n if not isinstance(\n restful_model, (RESTfulChatModelHandle, RESTfulGenerateModelHandle)\n ):\n@@ -744,6 +790,9 @@ def model_chat(\n ):\n # TODO: chat model roles may not be user and assistant.\n endpoint = get_endpoint(endpoint)\n+ client = RESTfulClient(base_url=endpoint)\n+ client._set_token(get_stored_token(endpoint, client))\n+\n chat_history: \"List[ChatCompletionMessage]\" = []\n if stream:\n # TODO: when stream=True, RestfulClient cannot generate words one by one.\n@@ -758,7 +807,7 @@ async def chat_internal():\n break\n print(\"Assistant: \", end=\"\", file=sys.stdout)\n response_content = \"\"\n- async for chunk in model.chat(\n+ for chunk in model.chat(\n prompt=prompt,\n chat_history=chat_history,\n generate_config={\"stream\": stream, \"max_tokens\": max_tokens},\n@@ -775,7 +824,6 @@ async def chat_internal():\n ChatCompletionMessage(role=\"assistant\", content=response_content)\n )\n \n- client = ActorClient(endpoint=endpoint)\n model = client.get_model(model_uid=model_uid)\n \n loop = asyncio.get_event_loop()\n@@ -795,8 +843,7 @@ async def chat_internal():\n # avoid displaying exception-unhandled warnings\n task.exception()\n else:\n- restful_client = RESTfulClient(base_url=endpoint)\n- restful_model = restful_client.get_model(model_uid=model_uid)\n+ restful_model = client.get_model(model_uid=model_uid)\n if not isinstance(\n restful_model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle)\n ):\n@@ -822,5 +869,31 @@ async def chat_internal():\n )\n \n \[email protected](\"login\", help=\"Login when the cluster is authenticated.\")\[email protected](\"--endpoint\", \"-e\", type=str, help=\"Xinference endpoint.\")\[email protected](\"--username\", type=str, required=True, help=\"Username.\")\[email protected](\n+ \"--password\",\n+ type=str,\n+ required=True,\n+ help=\"Password.\",\n+)\n+def cluster_login(\n+ endpoint: Optional[str],\n+ username: str,\n+ password: str,\n+):\n+ endpoint = get_endpoint(endpoint)\n+ restful_client = RESTfulClient(base_url=endpoint)\n+ if restful_client._cluster_authed:\n+ restful_client.login(username, password)\n+ access_token = restful_client._get_token()\n+ assert access_token is not None\n+ os.makedirs(XINFERENCE_AUTH_DIR, exist_ok=True)\n+ hashed_ep = get_hash_endpoint(endpoint)\n+ with open(os.path.join(XINFERENCE_AUTH_DIR, hashed_ep), \"w\") as f:\n+ f.write(access_token)\n+\n+\n if __name__ == \"__main__\":\n cli()\ndiff --git a/xinference/deploy/local.py b/xinference/deploy/local.py\nindex d646f80906..a152c45edc 100644\n--- a/xinference/deploy/local.py\n+++ b/xinference/deploy/local.py\n@@ -79,7 +79,12 @@ def 
run_in_subprocess(\n return p\n \n \n-def main(host: str, port: int, logging_conf: Optional[Dict] = None):\n+def main(\n+ host: str,\n+ port: int,\n+ logging_conf: Optional[Dict] = None,\n+ auth_config_file: Optional[str] = None,\n+):\n supervisor_address = f\"{host}:{get_next_port()}\"\n local_cluster = run_in_subprocess(supervisor_address, logging_conf)\n \n@@ -98,6 +103,7 @@ def main(host: str, port: int, logging_conf: Optional[Dict] = None):\n host=host,\n port=port,\n logging_conf=logging_conf,\n+ auth_config_file=auth_config_file,\n )\n finally:\n local_cluster.terminate()\ndiff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py\nindex ddc4f25224..57f03c99c6 100644\n--- a/xinference/deploy/supervisor.py\n+++ b/xinference/deploy/supervisor.py\n@@ -75,6 +75,7 @@ def main(\n port: int,\n supervisor_port: Optional[int],\n logging_conf: Optional[Dict] = None,\n+ auth_config_file: Optional[str] = None,\n ):\n supervisor_address = f\"{host}:{supervisor_port or get_next_port()}\"\n local_cluster = run_in_subprocess(supervisor_address, logging_conf)\n@@ -94,6 +95,7 @@ def main(\n host=host,\n port=port,\n logging_conf=logging_conf,\n+ auth_config_file=auth_config_file,\n )\n finally:\n local_cluster.terminate()\ndiff --git a/xinference/web/ui/package-lock.json b/xinference/web/ui/package-lock.json\nindex dfd1fc9d1b..6cbc3b3670 100644\n--- a/xinference/web/ui/package-lock.json\n+++ b/xinference/web/ui/package-lock.json\n@@ -27,7 +27,9 @@\n \"@testing-library/react\": \"^13.4.0\",\n \"@testing-library/user-event\": \"^13.5.0\",\n \"formik\": \"^2.4.2\",\n+ \"jsonwebtoken\": \"^9.0.2\",\n \"react\": \"^18.2.0\",\n+ \"react-cookie\": \"^6.1.1\",\n \"react-dom\": \"^18.2.0\",\n \"react-pro-sidebar\": \"^1.1.0-alpha.1\",\n \"react-router-dom\": \"^6.14.1\",\n@@ -4960,6 +4962,11 @@\n \"@types/node\": \"*\"\n }\n },\n+ \"node_modules/@types/cookie\": {\n+ \"version\": \"0.5.4\",\n+ \"resolved\": \"https://registry.npmjs.org/@types/cookie/-/cookie-0.5.4.tgz\",\n+ \"integrity\": \"sha512-7z/eR6O859gyWIAjuvBWFzNURmf2oPBmJlfVWkwehU5nzIyjwBsTh7WMmEEV4JFnHuQ3ex4oyTvfKzcyJVDBNA==\"\n+ },\n \"node_modules/@types/d3-color\": {\n \"version\": \"2.0.3\",\n \"resolved\": \"https://registry.npmjs.org/@types/d3-color/-/d3-color-2.0.3.tgz\",\n@@ -5069,6 +5076,15 @@\n \"@types/node\": \"*\"\n }\n },\n+ \"node_modules/@types/hoist-non-react-statics\": {\n+ \"version\": \"3.3.5\",\n+ \"resolved\": \"https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.5.tgz\",\n+ \"integrity\": \"sha512-SbcrWzkKBw2cdwRTwQAswfpB9g9LJWfjtUeW/jvNwbhC8cpmmNYVePa+ncbUe0rGTQ7G3Ff6mYUN2VMfLVr+Sg==\",\n+ \"dependencies\": {\n+ \"@types/react\": \"*\",\n+ \"hoist-non-react-statics\": \"^3.3.0\"\n+ }\n+ },\n \"node_modules/@types/html-minifier-terser\": {\n \"version\": \"6.1.0\",\n \"resolved\": \"https://registry.npmjs.org/@types/html-minifier-terser/-/html-minifier-terser-6.1.0.tgz\",\n@@ -6822,6 +6838,11 @@\n \"node-int64\": \"^0.4.0\"\n }\n },\n+ \"node_modules/buffer-equal-constant-time\": {\n+ \"version\": \"1.0.1\",\n+ \"resolved\": \"https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz\",\n+ \"integrity\": \"sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==\"\n+ },\n \"node_modules/buffer-from\": {\n \"version\": \"1.1.2\",\n \"resolved\": \"https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz\",\n@@ -8224,6 +8245,14 @@\n \"resolved\": 
\"https://registry.npmjs.org/duplexer/-/duplexer-0.1.2.tgz\",\n \"integrity\": \"sha512-jtD6YG370ZCIi/9GTaJKQxWTZD045+4R4hTk/x1UyoqadyJ9x9CgSi1RlVDQF8U2sxLLSnFkCaMihqljHIWgMg==\"\n },\n+ \"node_modules/ecdsa-sig-formatter\": {\n+ \"version\": \"1.0.11\",\n+ \"resolved\": \"https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz\",\n+ \"integrity\": \"sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==\",\n+ \"dependencies\": {\n+ \"safe-buffer\": \"^5.0.1\"\n+ }\n+ },\n \"node_modules/ee-first\": {\n \"version\": \"1.1.1\",\n \"resolved\": \"https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz\",\n@@ -13249,6 +13278,27 @@\n \"node\": \">=0.10.0\"\n }\n },\n+ \"node_modules/jsonwebtoken\": {\n+ \"version\": \"9.0.2\",\n+ \"resolved\": \"https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz\",\n+ \"integrity\": \"sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==\",\n+ \"dependencies\": {\n+ \"jws\": \"^3.2.2\",\n+ \"lodash.includes\": \"^4.3.0\",\n+ \"lodash.isboolean\": \"^3.0.3\",\n+ \"lodash.isinteger\": \"^4.0.4\",\n+ \"lodash.isnumber\": \"^3.0.3\",\n+ \"lodash.isplainobject\": \"^4.0.6\",\n+ \"lodash.isstring\": \"^4.0.1\",\n+ \"lodash.once\": \"^4.0.0\",\n+ \"ms\": \"^2.1.1\",\n+ \"semver\": \"^7.5.4\"\n+ },\n+ \"engines\": {\n+ \"node\": \">=12\",\n+ \"npm\": \">=6\"\n+ }\n+ },\n \"node_modules/jsx-ast-utils\": {\n \"version\": \"3.3.5\",\n \"resolved\": \"https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-3.3.5.tgz\",\n@@ -13263,6 +13313,25 @@\n \"node\": \">=4.0\"\n }\n },\n+ \"node_modules/jwa\": {\n+ \"version\": \"1.4.1\",\n+ \"resolved\": \"https://registry.npmjs.org/jwa/-/jwa-1.4.1.tgz\",\n+ \"integrity\": \"sha512-qiLX/xhEEFKUAJ6FiBMbes3w9ATzyk5W7Hvzpa/SLYdxNtng+gcurvrI7TbACjIXlsJyr05/S1oUhZrc63evQA==\",\n+ \"dependencies\": {\n+ \"buffer-equal-constant-time\": \"1.0.1\",\n+ \"ecdsa-sig-formatter\": \"1.0.11\",\n+ \"safe-buffer\": \"^5.0.1\"\n+ }\n+ },\n+ \"node_modules/jws\": {\n+ \"version\": \"3.2.2\",\n+ \"resolved\": \"https://registry.npmjs.org/jws/-/jws-3.2.2.tgz\",\n+ \"integrity\": \"sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==\",\n+ \"dependencies\": {\n+ \"jwa\": \"^1.4.1\",\n+ \"safe-buffer\": \"^5.0.1\"\n+ }\n+ },\n \"node_modules/kind-of\": {\n \"version\": \"6.0.3\",\n \"resolved\": \"https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz\",\n@@ -13395,6 +13464,36 @@\n \"resolved\": \"https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz\",\n \"integrity\": \"sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow==\"\n },\n+ \"node_modules/lodash.includes\": {\n+ \"version\": \"4.3.0\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz\",\n+ \"integrity\": \"sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==\"\n+ },\n+ \"node_modules/lodash.isboolean\": {\n+ \"version\": \"3.0.3\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz\",\n+ \"integrity\": \"sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==\"\n+ },\n+ \"node_modules/lodash.isinteger\": {\n+ \"version\": \"4.0.4\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz\",\n+ \"integrity\": 
\"sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==\"\n+ },\n+ \"node_modules/lodash.isnumber\": {\n+ \"version\": \"3.0.3\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz\",\n+ \"integrity\": \"sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==\"\n+ },\n+ \"node_modules/lodash.isplainobject\": {\n+ \"version\": \"4.0.6\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz\",\n+ \"integrity\": \"sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==\"\n+ },\n+ \"node_modules/lodash.isstring\": {\n+ \"version\": \"4.0.1\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz\",\n+ \"integrity\": \"sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==\"\n+ },\n \"node_modules/lodash.memoize\": {\n \"version\": \"4.1.2\",\n \"resolved\": \"https://registry.npmjs.org/lodash.memoize/-/lodash.memoize-4.1.2.tgz\",\n@@ -13405,6 +13504,11 @@\n \"resolved\": \"https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz\",\n \"integrity\": \"sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==\"\n },\n+ \"node_modules/lodash.once\": {\n+ \"version\": \"4.1.1\",\n+ \"resolved\": \"https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz\",\n+ \"integrity\": \"sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==\"\n+ },\n \"node_modules/lodash.sortby\": {\n \"version\": \"4.7.0\",\n \"resolved\": \"https://registry.npmjs.org/lodash.sortby/-/lodash.sortby-4.7.0.tgz\",\n@@ -15863,6 +15967,19 @@\n \"node\": \">=14\"\n }\n },\n+ \"node_modules/react-cookie\": {\n+ \"version\": \"6.1.1\",\n+ \"resolved\": \"https://registry.npmjs.org/react-cookie/-/react-cookie-6.1.1.tgz\",\n+ \"integrity\": \"sha512-fuFRpf8LH6SfmVMowDUIRywJF5jAUDUWrm0EI5VdXfTl5bPcJ7B0zWbuYpT0Tvikx7Gs18MlvAT+P+744dUz2g==\",\n+ \"dependencies\": {\n+ \"@types/hoist-non-react-statics\": \"^3.3.1\",\n+ \"hoist-non-react-statics\": \"^3.3.2\",\n+ \"universal-cookie\": \"^6.0.0\"\n+ },\n+ \"peerDependencies\": {\n+ \"react\": \">= 16.3.0\"\n+ }\n+ },\n \"node_modules/react-dev-utils\": {\n \"version\": \"12.0.1\",\n \"resolved\": \"https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-12.0.1.tgz\",\n@@ -18450,6 +18567,15 @@\n \"node\": \">=8\"\n }\n },\n+ \"node_modules/universal-cookie\": {\n+ \"version\": \"6.1.1\",\n+ \"resolved\": \"https://registry.npmjs.org/universal-cookie/-/universal-cookie-6.1.1.tgz\",\n+ \"integrity\": \"sha512-33S9x3CpdUnnjwTNs2Fgc41WGve2tdLtvaK2kPSbZRc5pGpz2vQFbRWMxlATsxNNe/Cy8SzmnmbuBM85jpZPtA==\",\n+ \"dependencies\": {\n+ \"@types/cookie\": \"^0.5.1\",\n+ \"cookie\": \"^0.5.0\"\n+ }\n+ },\n \"node_modules/universalify\": {\n \"version\": \"2.0.0\",\n \"resolved\": \"https://registry.npmjs.org/universalify/-/universalify-2.0.0.tgz\",\ndiff --git a/xinference/web/ui/package.json b/xinference/web/ui/package.json\nindex 86abc72cfb..2c0fe23afa 100644\n--- a/xinference/web/ui/package.json\n+++ b/xinference/web/ui/package.json\n@@ -11,9 +11,9 @@\n \"@fullcalendar/list\": \"^6.1.8\",\n \"@fullcalendar/timegrid\": \"^6.1.8\",\n \"@mui/icons-material\": \"^5.14.0\",\n+ \"@mui/lab\": \"latest\",\n \"@mui/material\": \"^5.14.0\",\n \"@mui/x-data-grid\": \"^6.10.0\",\n- \"@mui/lab\": \"latest\",\n \"@nivo/bar\": \"^0.83.0\",\n 
\"@nivo/core\": \"^0.83.0\",\n \"@nivo/geo\": \"^0.83.0\",\n@@ -24,6 +24,7 @@\n \"@testing-library/user-event\": \"^13.5.0\",\n \"formik\": \"^2.4.2\",\n \"react\": \"^18.2.0\",\n+ \"react-cookie\": \"^6.1.1\",\n \"react-dom\": \"^18.2.0\",\n \"react-pro-sidebar\": \"^1.1.0-alpha.1\",\n \"react-router-dom\": \"^6.14.1\",\n@@ -58,9 +59,9 @@\n ]\n },\n \"devDependencies\": {\n- \"@babel/plugin-proposal-private-property-in-object\": \"^7.21.11\",\n \"@babel/core\": \"^7.21.0\",\n \"@babel/eslint-parser\": \"^7.19.1\",\n+ \"@babel/plugin-proposal-private-property-in-object\": \"^7.21.11\",\n \"eslint\": \"^7.32.0\",\n \"eslint-config-prettier\": \"^8.5.0\",\n \"eslint-plugin-react\": \"^7.24.0\",\ndiff --git a/xinference/web/ui/src/App.js b/xinference/web/ui/src/App.js\nindex 693c872584..beffd7fc97 100644\n--- a/xinference/web/ui/src/App.js\n+++ b/xinference/web/ui/src/App.js\n@@ -1,22 +1,98 @@\n import { CssBaseline, ThemeProvider } from '@mui/material'\n+import Snackbar from '@mui/material/Snackbar'\n+import React, { useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n import { HashRouter, Route, Routes } from 'react-router-dom'\n \n+import { Alert } from './components/alertComponent'\n import { ApiContextProvider } from './components/apiContext'\n+import AuthAlertDialog from './components/authAlertDialog'\n+import { getEndpoint, isValidBearerToken } from './components/utils'\n import Layout from './scenes/_layout'\n import LaunchModel from './scenes/launch_model'\n+import Login from './scenes/login/login'\n import RegisterModel from './scenes/register_model'\n import RunningModels from './scenes/running_models'\n import { useMode } from './theme'\n \n function App() {\n const [theme] = useMode()\n+ const [cookie, setCookie, removeCookie] = useCookies(['token'])\n+ const [msg, setMsg] = useState('')\n+\n+ const endPoint = getEndpoint()\n+\n+ const removeToken = () => {\n+ removeCookie('token', { path: '/' })\n+ }\n+\n+ useEffect(() => {\n+ // token possible value: no_auth / need_auth / <real bearer token>\n+ fetch(endPoint + '/v1/cluster/auth', {\n+ method: 'GET',\n+ headers: {\n+ 'Content-Type': 'application/json',\n+ },\n+ }).then((res) => {\n+ if (!res.ok) {\n+ res.json().then((errorData) => {\n+ setMsg(\n+ `Server error: ${res.status} - ${\n+ errorData.detail || 'Unknown error'\n+ }`\n+ )\n+ })\n+ } else {\n+ res.json().then((data) => {\n+ if (data['auth'] === false) {\n+ if (cookie.token !== 'no_auth') {\n+ setCookie('token', 'no_auth', { path: '/' })\n+ }\n+ } else {\n+ // TODO: validate bearer token\n+ if (\n+ cookie.token === undefined ||\n+ !isValidBearerToken(cookie.token)\n+ ) {\n+ // not a bearer token, need a bearer token here\n+ setCookie('token', 'need_auth', { path: '/' })\n+ }\n+ }\n+ })\n+ }\n+ })\n+ // return a function in useEffect means doing something on component unmount\n+ return () => {\n+ removeToken()\n+ }\n+ }, [])\n+\n+ const handleClose = (event, reason) => {\n+ if (reason === 'clickaway') {\n+ return\n+ }\n+ setMsg('')\n+ }\n+\n return (\n <div className=\"app\">\n+ <Snackbar\n+ open={msg !== ''}\n+ autoHideDuration={10000}\n+ anchorOrigin={{ vertical: 'top', horizontal: 'center' }}\n+ onClose={handleClose}\n+ >\n+ <Alert severity=\"error\" onClose={handleClose} sx={{ width: '100%' }}>\n+ {msg}\n+ </Alert>\n+ </Snackbar>\n <HashRouter>\n <ThemeProvider theme={theme}>\n <ApiContextProvider>\n <CssBaseline />\n+ <AuthAlertDialog />\n <Routes>\n+ <Route path=\"/login\" element={<Login />} />\n <Route element={<Layout />}>\n <Route 
path=\"/\" element={<LaunchModel />} />\n <Route path=\"/running_models\" element={<RunningModels />} />\ndiff --git a/xinference/web/ui/src/components/Title.js b/xinference/web/ui/src/components/Title.js\nindex a28cfd5d67..05e124a281 100644\n--- a/xinference/web/ui/src/components/Title.js\n+++ b/xinference/web/ui/src/components/Title.js\n@@ -1,19 +1,42 @@\n-import { Box, Typography } from '@mui/material'\n+import ExitToAppIcon from '@mui/icons-material/ExitToApp'\n+import { Box, Stack, Typography } from '@mui/material'\n+import Button from '@mui/material/Button'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n+\n+import { isValidBearerToken } from './utils'\n+\n+const Title = ({ title }) => {\n+ const [cookie, , removeCookie] = useCookies(['token'])\n+ const navigate = useNavigate()\n+\n+ const handleLogout = () => {\n+ removeCookie('token', { path: '/' })\n+ navigate('/login', { replace: true })\n+ }\n \n-const Title = ({ title, subtitle }) => {\n return (\n <Box mb=\"30px\">\n- <Typography\n- variant=\"h2\"\n- color=\"#141414\"\n- fontWeight=\"bold\"\n- sx={{ m: '0 0 5px 0' }}\n- >\n- {title}\n- </Typography>\n- <Typography variant=\"h5\" color=\"#3d3d3d\">\n- {subtitle}\n- </Typography>\n+ <Stack direction=\"row\" alignItems=\"center\" justifyContent=\"space-between\">\n+ <Typography\n+ variant=\"h2\"\n+ color=\"#141414\"\n+ fontWeight=\"bold\"\n+ sx={{ m: '0 0 5px 0' }}\n+ >\n+ {title}\n+ </Typography>\n+ {isValidBearerToken(cookie.token) && (\n+ <Button\n+ variant=\"outlined\"\n+ size=\"large\"\n+ onClick={handleLogout}\n+ startIcon={<ExitToAppIcon />}\n+ >\n+ LOG OUT\n+ </Button>\n+ )}\n+ </Stack>\n </Box>\n )\n }\ndiff --git a/xinference/web/ui/src/components/alertComponent.js b/xinference/web/ui/src/components/alertComponent.js\nnew file mode 100644\nindex 0000000000..4603b3f9c6\n--- /dev/null\n+++ b/xinference/web/ui/src/components/alertComponent.js\n@@ -0,0 +1,8 @@\n+import MuiAlert from '@mui/material/Alert'\n+import React from 'react'\n+\n+const Alert = React.forwardRef(function Alert(props, ref) {\n+ return <MuiAlert elevation={6} ref={ref} variant=\"filled\" {...props} />\n+})\n+\n+export { Alert }\ndiff --git a/xinference/web/ui/src/components/apiContext.js b/xinference/web/ui/src/components/apiContext.js\nindex 0b8f5aa748..f4f223e648 100644\n--- a/xinference/web/ui/src/components/apiContext.js\n+++ b/xinference/web/ui/src/components/apiContext.js\n@@ -1,18 +1,14 @@\n import React, { createContext, useState } from 'react'\n \n+import { getEndpoint } from './utils'\n+\n export const ApiContext = createContext()\n \n export const ApiContextProvider = ({ children }) => {\n const [isCallingApi, setIsCallingApi] = useState(false)\n const [isUpdatingModel, setIsUpdatingModel] = useState(false)\n const [errorMsg, setErrorMsg] = useState('')\n- let endPoint = ''\n- if (!process.env.NODE_ENV || process.env.NODE_ENV === 'development') {\n- endPoint = 'http://127.0.0.1:9997'\n- } else {\n- const fullUrl = window.location.href\n- endPoint = fullUrl.split('/ui')[0]\n- }\n+ const endPoint = getEndpoint()\n \n return (\n <ApiContext.Provider\ndiff --git a/xinference/web/ui/src/components/authAlertDialog.js b/xinference/web/ui/src/components/authAlertDialog.js\nnew file mode 100644\nindex 0000000000..4150ac54b2\n--- /dev/null\n+++ b/xinference/web/ui/src/components/authAlertDialog.js\n@@ -0,0 +1,92 @@\n+import Button from '@mui/material/Button'\n+import Dialog from '@mui/material/Dialog'\n+import DialogActions from 
'@mui/material/DialogActions'\n+import DialogContent from '@mui/material/DialogContent'\n+import DialogContentText from '@mui/material/DialogContentText'\n+import DialogTitle from '@mui/material/DialogTitle'\n+import * as React from 'react'\n+import { useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n+\n+export default function AuthAlertDialog() {\n+ const navigate = useNavigate()\n+ const [authStatus, setAuthStatus] = useState('')\n+ const [, , removeCookie] = useCookies(['token'])\n+\n+ const handleAuthStatus = () => {\n+ const status = localStorage.getItem('authStatus')\n+ if (status) {\n+ setAuthStatus(status)\n+ } else {\n+ setAuthStatus('')\n+ }\n+ }\n+\n+ useEffect(() => {\n+ localStorage.removeItem('authStatus')\n+ window.addEventListener('auth-status', handleAuthStatus)\n+\n+ return () => {\n+ window.removeEventListener('auth-status', handleAuthStatus)\n+ }\n+ }, [])\n+\n+ const handleClose = () => {\n+ // trigger first\n+ const code = localStorage.getItem('authStatus')\n+ localStorage.removeItem('authStatus')\n+ setAuthStatus('')\n+ if (code === '401') {\n+ removeCookie('token', { path: '/' })\n+ navigate('/login', { replace: true })\n+ }\n+ }\n+\n+ const handleDialogClose = (event, reason) => {\n+ if (reason && reason === 'backdropClick') {\n+ return\n+ }\n+ localStorage.removeItem('authStatus')\n+ setAuthStatus('')\n+ }\n+\n+ return (\n+ <React.Fragment>\n+ <Dialog\n+ fullWidth\n+ maxWidth=\"md\"\n+ open={authStatus === '401' || authStatus === '403'}\n+ onClose={handleDialogClose}\n+ aria-labelledby=\"alert-dialog-title\"\n+ aria-describedby=\"alert-dialog-description\"\n+ >\n+ {authStatus === '403' && (\n+ <DialogTitle id=\"alert-dialog-title\">\n+ {'Permission Error'}\n+ </DialogTitle>\n+ )}\n+ {authStatus === '401' && (\n+ <DialogTitle id=\"alert-dialog-title\">\n+ {'Authentication Error'}\n+ </DialogTitle>\n+ )}\n+ <DialogContent>\n+ {authStatus === '403' && (\n+ <DialogContentText id=\"alert-dialog-description\">\n+ {'You do not have permissions to do this!'}\n+ </DialogContentText>\n+ )}\n+ {authStatus === '401' && (\n+ <DialogContentText id=\"alert-dialog-description\">\n+ {'Invalid credentials! 
Please login.'}\n+ </DialogContentText>\n+ )}\n+ </DialogContent>\n+ <DialogActions>\n+ <Button onClick={handleClose}>CONFIRMED</Button>\n+ </DialogActions>\n+ </Dialog>\n+ </React.Fragment>\n+ )\n+}\ndiff --git a/xinference/web/ui/src/components/errorMessageSnackBar.js b/xinference/web/ui/src/components/errorMessageSnackBar.js\nindex 2c4802adea..905f84d50b 100644\n--- a/xinference/web/ui/src/components/errorMessageSnackBar.js\n+++ b/xinference/web/ui/src/components/errorMessageSnackBar.js\n@@ -1,13 +1,9 @@\n-import MuiAlert from '@mui/material/Alert'\n import Snackbar from '@mui/material/Snackbar'\n import React, { useContext } from 'react'\n \n+import { Alert } from './alertComponent'\n import { ApiContext } from './apiContext'\n \n-const Alert = React.forwardRef(function Alert(props, ref) {\n- return <MuiAlert elevation={6} ref={ref} variant=\"filled\" {...props} />\n-})\n-\n const ErrorMessageSnackBar = () => {\n const { errorMsg, setErrorMsg } = useContext(ApiContext)\n \ndiff --git a/xinference/web/ui/src/components/fetcher.js b/xinference/web/ui/src/components/fetcher.js\nnew file mode 100644\nindex 0000000000..6d1544ebe0\n--- /dev/null\n+++ b/xinference/web/ui/src/components/fetcher.js\n@@ -0,0 +1,36 @@\n+import { Cookies } from 'react-cookie'\n+\n+import { isValidBearerToken } from './utils'\n+\n+const cookies = new Cookies()\n+\n+const updateOptions = (url, options) => {\n+ const update = { ...options }\n+ if (cookies.get('token') !== 'no_auth') {\n+ update.headers = {\n+ ...update.headers,\n+ Authorization: 'Bearer ' + cookies.get('token'),\n+ }\n+ }\n+ return update\n+}\n+\n+export default function fetcher(url, options) {\n+ return fetch(url, updateOptions(url, options)).then((res) => {\n+ // For the situation that server has already been restarted, the current token may become invalid,\n+ // which leads to UI hangs.\n+ if (res.status === 401 && isValidBearerToken(cookies.get('token'))) {\n+ if (localStorage.getItem('authStatus') !== '401') {\n+ localStorage.setItem('authStatus', '401')\n+ window.dispatchEvent(new Event('auth-status'))\n+ }\n+ } else if (res.status === 403 && isValidBearerToken(cookies.get('token'))) {\n+ if (localStorage.getItem('authStatus') !== '403') {\n+ localStorage.setItem('authStatus', '403')\n+ window.dispatchEvent(new Event('auth-status'))\n+ }\n+ } else {\n+ return res\n+ }\n+ })\n+}\ndiff --git a/xinference/web/ui/src/components/utils.js b/xinference/web/ui/src/components/utils.js\nnew file mode 100644\nindex 0000000000..fe995efe03\n--- /dev/null\n+++ b/xinference/web/ui/src/components/utils.js\n@@ -0,0 +1,18 @@\n+const getEndpoint = () => {\n+ let endPoint = ''\n+ if (!process.env.NODE_ENV || process.env.NODE_ENV === 'development') {\n+ endPoint = 'http://127.0.0.1:9997'\n+ } else {\n+ const fullUrl = window.location.href\n+ endPoint = fullUrl.split('/ui')[0]\n+ }\n+ return endPoint\n+}\n+\n+const isValidBearerToken = (token) => {\n+ return (\n+ token !== '' && token !== undefined && token !== null && token.length > 10\n+ )\n+}\n+\n+export { getEndpoint, isValidBearerToken }\ndiff --git a/xinference/web/ui/src/index.js b/xinference/web/ui/src/index.js\nindex 34d5faf9cb..eed265203d 100644\n--- a/xinference/web/ui/src/index.js\n+++ b/xinference/web/ui/src/index.js\n@@ -1,4 +1,5 @@\n import React from 'react'\n+import { CookiesProvider } from 'react-cookie'\n import ReactDOM from 'react-dom/client'\n \n import App from './App'\n@@ -6,6 +7,8 @@ import App from './App'\n const root = ReactDOM.createRoot(document.getElementById('root'))\n root.render(\n 
<React.StrictMode>\n- <App />\n+ <CookiesProvider>\n+ <App />\n+ </CookiesProvider>\n </React.StrictMode>\n )\ndiff --git a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\nindex 07e01749f9..14c37a1a7d 100644\n--- a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\n+++ b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\n@@ -12,6 +12,7 @@ import IconButton from '@mui/material/IconButton'\n import React, { useContext, useEffect, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n \n const CARD_HEIGHT = 270\n const CARD_WIDTH = 270\n@@ -46,8 +47,8 @@ const EmbeddingCard = ({\n model_type: 'embedding',\n }\n \n- // First fetch request to initiate the model\n- fetch(url + '/v1/models', {\n+ // First fetcher request to initiate the model\n+ fetcher(url + '/v1/models', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n@@ -204,7 +205,7 @@ const EmbeddingCard = ({\n \n const handeCustomDelete = (e) => {\n e.stopPropagation()\n- fetch(url + `/v1/model_registrations/embedding/${modelData.model_name}`, {\n+ fetcher(url + `/v1/model_registrations/embedding/${modelData.model_name}`, {\n method: 'DELETE',\n headers: {\n 'Content-Type': 'application/json',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/index.js b/xinference/web/ui/src/scenes/launch_model/index.js\nindex 33aceee695..b6c89bbb94 100644\n--- a/xinference/web/ui/src/scenes/launch_model/index.js\n+++ b/xinference/web/ui/src/scenes/launch_model/index.js\n@@ -1,6 +1,8 @@\n import { TabContext, TabList, TabPanel } from '@mui/lab'\n import { Box, Tab } from '@mui/material'\n import React, { useContext, useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n \n import { ApiContext } from '../../components/apiContext'\n import ErrorMessageSnackBar from '../../components/errorMessageSnackBar'\n@@ -16,12 +18,22 @@ const LaunchModel = () => {\n const [gpuAvailable, setGPUAvailable] = useState(-1)\n \n const { setErrorMsg } = useContext(ApiContext)\n+ const [cookie] = useCookies(['token'])\n+ const navigate = useNavigate()\n \n const handleTabChange = (event, newValue) => {\n setValue(newValue)\n }\n \n useEffect(() => {\n+ if (cookie.token === '' || cookie.token === undefined) {\n+ return\n+ }\n+ if (cookie.token === 'need_auth') {\n+ navigate('/login', { replace: true })\n+ return\n+ }\n+\n if (gpuAvailable === -1) {\n fetch(endPoint + '/v1/cluster/devices', {\n method: 'GET',\n@@ -45,7 +57,7 @@ const LaunchModel = () => {\n }\n })\n }\n- }, [])\n+ }, [cookie.token])\n \n return (\n <Box m=\"20px\">\ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchCustom.js b/xinference/web/ui/src/scenes/launch_model/launchCustom.js\nindex eda5599680..fbbf683ab1 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchCustom.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchCustom.js\n@@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material'\n import React, { useContext, useEffect, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import EmbeddingCard from './embeddingCard'\n import ModelCard from './modelCard'\n import RerankCard from './rerankCard'\n@@ -33,7 +34,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n try {\n setIsCallingApi(true)\n \n- const 
rerankResponse = await fetch(\n+ const rerankResponse = await fetcher(\n `${endPoint}/v1/model_registrations/rerank`,\n {\n method: 'GET',\n@@ -44,7 +45,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n (data) => !data.is_builtin\n )\n \n- const embeddingResponse = await fetch(\n+ const embeddingResponse = await fetcher(\n `${endPoint}/v1/model_registrations/embedding`,\n {\n method: 'GET',\n@@ -56,7 +57,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n (data) => !data.is_builtin\n )\n \n- const llmResponse = await fetch(\n+ const llmResponse = await fetcher(\n `${endPoint}/v1/model_registrations/LLM`,\n {\n method: 'GET',\n@@ -69,7 +70,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n \n const newEmbeddingData = await Promise.all(\n customEmbeddingRegistrations.map(async (registration) => {\n- const desc = await fetch(\n+ const desc = await fetcher(\n `${endPoint}/v1/model_registrations/embedding/${registration.model_name}`,\n {\n method: 'GET',\n@@ -85,7 +86,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n \n const newLLMData = await Promise.all(\n customLLMRegistrations.map(async (registration) => {\n- const desc = await fetch(\n+ const desc = await fetcher(\n `${endPoint}/v1/model_registrations/LLM/${registration.model_name}`,\n {\n method: 'GET',\n@@ -101,7 +102,7 @@ const LaunchCustom = ({ gpuAvailable }) => {\n \n const newRerankData = await Promise.all(\n customRerankRegistrations.map(async (registration) => {\n- const desc = await fetch(\n+ const desc = await fetcher(\n `${endPoint}/v1/model_registrations/rerank/${registration.model_name}`,\n {\n method: 'GET',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\nindex 9d54c95ebe..fecbbf9eb0 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\n@@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material'\n import React, { useContext, useEffect, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import EmbeddingCard from './embeddingCard'\n \n const LaunchEmbedding = () => {\n@@ -31,7 +32,7 @@ const LaunchEmbedding = () => {\n try {\n setIsCallingApi(true)\n \n- const response = await fetch(\n+ const response = await fetcher(\n `${endPoint}/v1/model_registrations/embedding?detailed=true`,\n {\n method: 'GET',\n@@ -41,7 +42,7 @@ const LaunchEmbedding = () => {\n const registrations = await response.json()\n const newRegistrationData = await Promise.all(\n registrations.map(async (registration) => {\n- const desc = await fetch(\n+ const desc = await fetcher(\n `${endPoint}/v1/model_registrations/embedding/${registration.model_name}`,\n {\n method: 'GET',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchLLM.js b/xinference/web/ui/src/scenes/launch_model/launchLLM.js\nindex 9755c13462..9b3eabc18d 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchLLM.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchLLM.js\n@@ -7,19 +7,22 @@ import {\n TextField,\n } from '@mui/material'\n import React, { useContext, useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import ModelCard from './modelCard'\n \n const LaunchLLM = ({ gpuAvailable }) => {\n let endPoint = useContext(ApiContext).endPoint\n- 
const [registrationData, setRegistrationData] = useState([])\n const { isCallingApi, setIsCallingApi } = useContext(ApiContext)\n const { isUpdatingModel } = useContext(ApiContext)\n+ const { setErrorMsg } = useContext(ApiContext)\n+ const [cookie] = useCookies(['token'])\n \n+ const [registrationData, setRegistrationData] = useState([])\n // States used for filtering\n const [searchTerm, setSearchTerm] = useState('')\n-\n const [modelAbility, setModelAbility] = useState('all')\n \n const handleChange = (event) => {\n@@ -53,23 +56,39 @@ const LaunchLLM = ({ gpuAvailable }) => {\n return true\n }\n \n- const update = async () => {\n- if (isCallingApi || isUpdatingModel) return\n+ const update = () => {\n+ if (\n+ isCallingApi ||\n+ isUpdatingModel ||\n+ cookie.token === '' ||\n+ cookie.token === undefined ||\n+ cookie.token === 'need_auth'\n+ )\n+ return\n \n try {\n setIsCallingApi(true)\n \n- const response = await fetch(\n- `${endPoint}/v1/model_registrations/LLM?detailed=true`,\n- {\n- method: 'GET',\n+ fetcher(`${endPoint}/v1/model_registrations/LLM?detailed=true`, {\n+ method: 'GET',\n+ }).then((response) => {\n+ if (!response.ok) {\n+ response\n+ .json()\n+ .then((errData) =>\n+ setErrorMsg(\n+ `Server error: ${response.status} - ${\n+ errData.detail || 'Unknown error'\n+ }`\n+ )\n+ )\n+ } else {\n+ response.json().then((data) => {\n+ const builtinRegistrations = data.filter((v) => v.is_builtin)\n+ setRegistrationData(builtinRegistrations)\n+ })\n }\n- )\n-\n- const registrations = await response.json()\n- const builtinRegistrations = registrations.filter((v) => v.is_builtin)\n-\n- setRegistrationData(builtinRegistrations)\n+ })\n } catch (error) {\n console.error('Error:', error)\n } finally {\n@@ -78,8 +97,8 @@ const LaunchLLM = ({ gpuAvailable }) => {\n }\n \n useEffect(() => {\n- update().catch(console.error)\n- }, [])\n+ update()\n+ }, [cookie.token])\n \n const style = {\n display: 'grid',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchRerank.js b/xinference/web/ui/src/scenes/launch_model/launchRerank.js\nindex a342d3c9ca..bb26b629f4 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchRerank.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchRerank.js\n@@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material'\n import React, { useContext, useEffect, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import RerankCard from './rerankCard'\n \n const LaunchRerank = () => {\n@@ -31,7 +32,7 @@ const LaunchRerank = () => {\n try {\n setIsCallingApi(true)\n \n- const response = await fetch(\n+ const response = await fetcher(\n `${endPoint}/v1/model_registrations/rerank?detailed=true`,\n {\n method: 'GET',\n@@ -41,7 +42,7 @@ const LaunchRerank = () => {\n const registrations = await response.json()\n const newRegistrationData = await Promise.all(\n registrations.map(async (registration) => {\n- const desc = await fetch(\n+ const desc = await fetcher(\n `${endPoint}/v1/model_registrations/rerank/${registration.model_name}`,\n {\n method: 'GET',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/modelCard.js b/xinference/web/ui/src/scenes/launch_model/modelCard.js\nindex 8f3224d28a..b4d0330b05 100644\n--- a/xinference/web/ui/src/scenes/launch_model/modelCard.js\n+++ b/xinference/web/ui/src/scenes/launch_model/modelCard.js\n@@ -21,8 +21,10 @@ import {\n import IconButton from '@mui/material/IconButton'\n import Typography from 
'@mui/material/Typography'\n import React, { useContext, useEffect, useState } from 'react'\n+import { useNavigate } from 'react-router-dom'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n \n const CARD_HEIGHT = 380\n const CARD_WIDTH = 300\n@@ -33,6 +35,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => {\n const { isCallingApi, setIsCallingApi } = useContext(ApiContext)\n const { isUpdatingModel } = useContext(ApiContext)\n const { setErrorMsg } = useContext(ApiContext)\n+ const navigate = useNavigate()\n \n // Model parameter selections\n const [modelUID, setModelUID] = useState('')\n@@ -122,8 +125,8 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => {\n nGPU === '0' ? null : nGPU === 'auto' ? 'auto' : parseInt(nGPU, 10),\n }\n \n- // First fetch request to initiate the model\n- fetch(url + '/v1/models', {\n+ // First fetcher request to initiate the model\n+ fetcher(url + '/v1/models', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n@@ -141,7 +144,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => {\n )\n })\n } else {\n- window.open(url + '/ui/#/running_models', '_blank', 'noreferrer')\n+ navigate('/running_models')\n }\n setIsCallingApi(false)\n })\n@@ -281,7 +284,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => {\n \n const handeCustomDelete = (e) => {\n e.stopPropagation()\n- fetch(url + `/v1/model_registrations/LLM/${modelData.model_name}`, {\n+ fetcher(url + `/v1/model_registrations/LLM/${modelData.model_name}`, {\n method: 'DELETE',\n headers: {\n 'Content-Type': 'application/json',\ndiff --git a/xinference/web/ui/src/scenes/launch_model/rerankCard.js b/xinference/web/ui/src/scenes/launch_model/rerankCard.js\nindex a38ff6c39c..f5b8c1d21e 100644\n--- a/xinference/web/ui/src/scenes/launch_model/rerankCard.js\n+++ b/xinference/web/ui/src/scenes/launch_model/rerankCard.js\n@@ -12,6 +12,7 @@ import IconButton from '@mui/material/IconButton'\n import React, { useContext, useEffect, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n \n const CARD_HEIGHT = 270\n const CARD_WIDTH = 270\n@@ -45,8 +46,8 @@ const RerankCard = ({\n model_type: 'rerank',\n }\n \n- // First fetch request to initiate the model\n- fetch(url + '/v1/models', {\n+ // First fetcher request to initiate the model\n+ fetcher(url + '/v1/models', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n@@ -190,7 +191,7 @@ const RerankCard = ({\n \n const handeCustomDelete = (e) => {\n e.stopPropagation()\n- fetch(url + `/v1/model_registrations/rerank/${modelData.model_name}`, {\n+ fetcher(url + `/v1/model_registrations/rerank/${modelData.model_name}`, {\n method: 'DELETE',\n headers: {\n 'Content-Type': 'application/json',\ndiff --git a/xinference/web/ui/src/scenes/login/header.js b/xinference/web/ui/src/scenes/login/header.js\nnew file mode 100644\nindex 0000000000..247951c19a\n--- /dev/null\n+++ b/xinference/web/ui/src/scenes/login/header.js\n@@ -0,0 +1,37 @@\n+import { AppBar, Box, Toolbar } from '@mui/material'\n+import Typography from '@mui/material/Typography'\n+import * as React from 'react'\n+\n+import icon from '../../media/icon.webp'\n+\n+export default function Header() {\n+ return (\n+ <AppBar\n+ elevation={0}\n+ color=\"transparent\"\n+ sx={{\n+ backdropFilter: 'blur(20px)',\n+ borderBottom: 1,\n+ 
borderColor: 'grey.300',\n+ zIndex: (theme) => theme.zIndex.drawer + 1,\n+ }}\n+ >\n+ <Toolbar sx={{ justifyContent: 'start' }}>\n+ <Box\n+ component=\"img\"\n+ alt=\"profile\"\n+ src={icon}\n+ height=\"60px\"\n+ width=\"60px\"\n+ borderRadius=\"50%\"\n+ sx={{ objectFit: 'cover', mr: 1.5 }}\n+ />\n+ <Box textAlign=\"left\">\n+ <Typography fontWeight=\"bold\" fontSize=\"1.7rem\">\n+ {'Xinference'}\n+ </Typography>\n+ </Box>\n+ </Toolbar>\n+ </AppBar>\n+ )\n+}\ndiff --git a/xinference/web/ui/src/scenes/login/login.js b/xinference/web/ui/src/scenes/login/login.js\nnew file mode 100644\nindex 0000000000..05a26b1b93\n--- /dev/null\n+++ b/xinference/web/ui/src/scenes/login/login.js\n@@ -0,0 +1,112 @@\n+import { Box } from '@mui/material'\n+import Button from '@mui/material/Button'\n+import Container from '@mui/material/Container'\n+import TextField from '@mui/material/TextField'\n+import Typography from '@mui/material/Typography'\n+import * as React from 'react'\n+import { Fragment, useContext, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n+\n+import { ApiContext } from '../../components/apiContext'\n+import ErrorMessageSnackBar from '../../components/errorMessageSnackBar'\n+import { getEndpoint } from '../../components/utils'\n+import Header from './header'\n+\n+function Login() {\n+ const [, setCookie] = useCookies(['token'])\n+ const navigate = useNavigate()\n+ const [username, setUsername] = useState('')\n+ const [password, setPassword] = useState('')\n+ const { setErrorMsg } = useContext(ApiContext)\n+ const endpoint = getEndpoint()\n+\n+ const handleSubmit = () => {\n+ fetch(endpoint + '/token', {\n+ method: 'POST',\n+ headers: {\n+ 'Content-Type': 'application/json',\n+ },\n+ body: JSON.stringify({\n+ username: username,\n+ password: password,\n+ }),\n+ }).then((res) => {\n+ if (!res.ok) {\n+ res.json().then((errorData) => {\n+ setErrorMsg(\n+ `Login failed: ${res.status} - ${\n+ errorData.detail || 'Unknown error'\n+ }`\n+ )\n+ })\n+ } else {\n+ res.json().then((data) => {\n+ setCookie('token', data['access_token'], { path: '/' })\n+ navigate('/')\n+ })\n+ }\n+ })\n+ }\n+\n+ return (\n+ <Fragment>\n+ <Header />\n+ <Container component=\"main\" maxWidth=\"xl\" sx={{ marginTop: 20 }}>\n+ <ErrorMessageSnackBar />\n+ <Box\n+ sx={{\n+ marginTop: 8,\n+ display: 'flex',\n+ flexDirection: 'column',\n+ alignItems: 'center',\n+ }}\n+ >\n+ <Typography component=\"h1\" variant=\"h5\">\n+ LOGIN\n+ </Typography>\n+ <Box component=\"main\" noValidate sx={{ mt: 1 }}>\n+ <TextField\n+ margin=\"normal\"\n+ required\n+ fullWidth\n+ id=\"username\"\n+ label=\"Username\"\n+ name=\"username\"\n+ value={username}\n+ onChange={(e) => {\n+ setUsername(e.target.value)\n+ }}\n+ autoFocus\n+ />\n+ <TextField\n+ margin=\"normal\"\n+ required\n+ fullWidth\n+ name=\"password\"\n+ label=\"Password\"\n+ type=\"password\"\n+ id=\"password\"\n+ autoComplete=\"current-password\"\n+ value={password}\n+ onChange={(e) => {\n+ setPassword(e.target.value)\n+ }}\n+ />\n+ <Button\n+ type=\"submit\"\n+ fullWidth\n+ variant=\"contained\"\n+ sx={{ mt: 3, mb: 2 }}\n+ onClick={handleSubmit}\n+ >\n+ Sign In\n+ </Button>\n+ </Box>\n+ </Box>\n+ </Container>\n+ </Fragment>\n+ )\n+}\n+\n+export default Login\ndiff --git a/xinference/web/ui/src/scenes/register_model/index.js b/xinference/web/ui/src/scenes/register_model/index.js\nindex 85dc8c8090..5d8a760368 100644\n--- a/xinference/web/ui/src/scenes/register_model/index.js\n+++ 
b/xinference/web/ui/src/scenes/register_model/index.js\n@@ -14,9 +14,12 @@ import AlertTitle from '@mui/material/AlertTitle'\n import Button from '@mui/material/Button'\n import TextField from '@mui/material/TextField'\n import React, { useContext, useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n \n import { ApiContext } from '../../components/apiContext'\n import ErrorMessageSnackBar from '../../components/errorMessageSnackBar'\n+import fetcher from '../../components/fetcher'\n import Title from '../../components/Title'\n import { useMode } from '../../theme'\n import RegisterEmbeddingModel from './register_embedding'\n@@ -54,6 +57,8 @@ const RegisterModel = () => {\n })\n const [familyLabel, setFamilyLabel] = useState('')\n const [tabValue, setTabValue] = React.useState('1')\n+ const [cookie] = useCookies(['token'])\n+ const navigate = useNavigate()\n \n const errorModelName = formData.model_name.trim().length <= 0\n const errorModelDescription = formData.model_description.length < 0\n@@ -81,6 +86,14 @@ const RegisterModel = () => {\n errorFamily\n \n useEffect(() => {\n+ if (cookie.token === '' || cookie.token === undefined) {\n+ return\n+ }\n+ if (cookie.token === 'need_auth') {\n+ navigate('/login', { replace: true })\n+ return\n+ }\n+\n const getBuiltinFamilies = async () => {\n const response = await fetch(endPoint + '/v1/models/families', {\n method: 'GET',\n@@ -147,7 +160,7 @@ const RegisterModel = () => {\n console.error('Error: ', error)\n })\n }\n- })\n+ }, [cookie.token])\n \n const getFamilyByAbility = () => {\n if (formData.model_ability.includes('chat')) {\n@@ -232,7 +245,7 @@ const RegisterModel = () => {\n }\n \n try {\n- const response = await fetch(endPoint + '/v1/model_registrations/LLM', {\n+ const response = await fetcher(endPoint + '/v1/model_registrations/LLM', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\ndiff --git a/xinference/web/ui/src/scenes/register_model/register_embedding.js b/xinference/web/ui/src/scenes/register_model/register_embedding.js\nindex 29ce0335b5..ac7ab8d4ae 100644\n--- a/xinference/web/ui/src/scenes/register_model/register_embedding.js\n+++ b/xinference/web/ui/src/scenes/register_model/register_embedding.js\n@@ -6,6 +6,7 @@ import TextField from '@mui/material/TextField'\n import React, { useContext, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import { useMode } from '../../theme'\n \n const SUPPORTED_LANGUAGES_DICT = { en: 'English', zh: 'Chinese' }\n@@ -41,7 +42,7 @@ const RegisterEmbeddingModel = () => {\n }\n \n try {\n- const response = await fetch(\n+ const response = await fetcher(\n endPoint + '/v1/model_registrations/embedding',\n {\n method: 'POST',\ndiff --git a/xinference/web/ui/src/scenes/register_model/register_rerank.js b/xinference/web/ui/src/scenes/register_model/register_rerank.js\nindex ed8f7255c9..075b35ff9d 100644\n--- a/xinference/web/ui/src/scenes/register_model/register_rerank.js\n+++ b/xinference/web/ui/src/scenes/register_model/register_rerank.js\n@@ -6,6 +6,7 @@ import TextField from '@mui/material/TextField'\n import React, { useContext, useState } from 'react'\n \n import { ApiContext } from '../../components/apiContext'\n+import fetcher from '../../components/fetcher'\n import { useMode } from '../../theme'\n \n const SUPPORTED_LANGUAGES_DICT = { en: 'English', zh: 'Chinese' }\n@@ -36,7 +37,7 @@ const 
RegisterRerankModel = () => {\n }\n \n try {\n- const response = await fetch(\n+ const response = await fetcher(\n endPoint + '/v1/model_registrations/rerank',\n {\n method: 'POST',\ndiff --git a/xinference/web/ui/src/scenes/running_models/index.js b/xinference/web/ui/src/scenes/running_models/index.js\nindex e87bd57aea..7a755403b3 100644\n--- a/xinference/web/ui/src/scenes/running_models/index.js\n+++ b/xinference/web/ui/src/scenes/running_models/index.js\n@@ -4,8 +4,12 @@ import { TabContext, TabList, TabPanel } from '@mui/lab'\n import { Box, Stack, Tab } from '@mui/material'\n import { DataGrid } from '@mui/x-data-grid'\n import React, { useContext, useEffect, useState } from 'react'\n+import { useCookies } from 'react-cookie'\n+import { useNavigate } from 'react-router-dom'\n \n import { ApiContext } from '../../components/apiContext'\n+import ErrorMessageSnackBar from '../../components/errorMessageSnackBar'\n+import fetcher from '../../components/fetcher'\n import Title from '../../components/Title'\n \n const RunningModels = () => {\n@@ -16,6 +20,9 @@ const RunningModels = () => {\n const [rerankModelData, setRerankModelData] = useState([])\n const { isCallingApi, setIsCallingApi } = useContext(ApiContext)\n const { isUpdatingModel, setIsUpdatingModel } = useContext(ApiContext)\n+ const { setErrorMsg } = useContext(ApiContext)\n+ const [cookie] = useCookies(['token'])\n+ const navigate = useNavigate()\n const endPoint = useContext(ApiContext).endPoint\n \n const handleTabChange = (event, newValue) => {\n@@ -23,6 +30,13 @@ const RunningModels = () => {\n }\n \n const update = (isCallingApi) => {\n+ if (cookie.token === '' || cookie.token === undefined) {\n+ return\n+ }\n+ if (cookie.token === 'need_auth') {\n+ navigate('/login', { replace: true })\n+ return\n+ }\n if (isCallingApi) {\n setLlmData([{ id: 'Loading, do not refresh page...', url: 'IS_LOADING' }])\n setEmbeddingModelData([\n@@ -36,36 +50,47 @@ const RunningModels = () => {\n ])\n } else {\n setIsUpdatingModel(true)\n- fetch(`${endPoint}/v1/models/`, {\n+ fetcher(`${endPoint}/v1/models/`, {\n method: 'GET',\n })\n- .then((response) => response.json())\n- .then((data) => {\n- const newLlmData = []\n- const newEmbeddingModelData = []\n- const newImageModelData = []\n- const newRerankModelData = []\n- Object.entries(data).forEach(([key, value]) => {\n- let newValue = {\n- ...value,\n- id: key,\n- url: key,\n- }\n- if (newValue.model_type === 'LLM') {\n- newLlmData.push(newValue)\n- } else if (newValue.model_type === 'embedding') {\n- newEmbeddingModelData.push(newValue)\n- } else if (newValue.model_type === 'image') {\n- newImageModelData.push(newValue)\n- } else if (newValue.model_type === 'rerank') {\n- newRerankModelData.push(newValue)\n- }\n- })\n- setLlmData(newLlmData)\n- setEmbeddingModelData(newEmbeddingModelData)\n- setImageModelData(newImageModelData)\n- setRerankModelData(newRerankModelData)\n- setIsUpdatingModel(false)\n+ .then((response) => {\n+ if (!response.ok) {\n+ response.json().then((errorData) => {\n+ setErrorMsg(\n+ `Login failed: ${response.status} - ${\n+ errorData.detail || 'Unknown error'\n+ }`\n+ )\n+ })\n+ } else {\n+ response.json().then((data) => {\n+ const newLlmData = []\n+ const newEmbeddingModelData = []\n+ const newImageModelData = []\n+ const newRerankModelData = []\n+ Object.entries(data).forEach(([key, value]) => {\n+ let newValue = {\n+ ...value,\n+ id: key,\n+ url: key,\n+ }\n+ if (newValue.model_type === 'LLM') {\n+ newLlmData.push(newValue)\n+ } else if (newValue.model_type === 
'embedding') {\n+ newEmbeddingModelData.push(newValue)\n+ } else if (newValue.model_type === 'image') {\n+ newImageModelData.push(newValue)\n+ } else if (newValue.model_type === 'rerank') {\n+ newRerankModelData.push(newValue)\n+ }\n+ })\n+ setLlmData(newLlmData)\n+ setEmbeddingModelData(newEmbeddingModelData)\n+ setImageModelData(newImageModelData)\n+ setRerankModelData(newRerankModelData)\n+ setIsUpdatingModel(false)\n+ })\n+ }\n })\n .catch((error) => {\n console.error('Error:', error)\n@@ -77,7 +102,7 @@ const RunningModels = () => {\n useEffect(() => {\n update(isCallingApi)\n // eslint-disable-next-line\n- }, [isCallingApi])\n+ }, [isCallingApi, cookie.token])\n \n const llmColumns = [\n {\n@@ -154,14 +179,14 @@ const RunningModels = () => {\n \n setIsCallingApi(true)\n \n- fetch(openUrl, {\n+ fetcher(openUrl, {\n method: 'HEAD',\n })\n .then((response) => {\n if (response.status === 404) {\n // If web UI doesn't exist (404 Not Found)\n console.log('UI does not exist, creating new...')\n- return fetch(gradioUrl, {\n+ return fetcher(gradioUrl, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n@@ -231,7 +256,7 @@ const RunningModels = () => {\n return\n }\n setIsCallingApi(true)\n- fetch(closeUrl, {\n+ fetcher(closeUrl, {\n method: 'DELETE',\n })\n .then((response) => {\n@@ -328,7 +353,7 @@ const RunningModels = () => {\n return\n }\n setIsCallingApi(true)\n- fetch(closeUrl, {\n+ fetcher(closeUrl, {\n method: 'DELETE',\n })\n .then((response) => {\n@@ -414,6 +439,7 @@ const RunningModels = () => {\n }}\n >\n <Title title=\"Running Models\" />\n+ <ErrorMessageSnackBar />\n <TabContext value={tabValue}>\n <Box sx={{ borderBottom: 1, borderColor: 'divider' }}>\n <TabList\n" }
[ { "diff_hunk": "@@ -0,0 +1,225 @@\n+# SOME DESCRIPTIVE TITLE.\n+# Copyright (C) 2023, Xorbits Inc.\n+# This file is distributed under the same license as the Xinference package.\n+# FIRST AUTHOR <EMAIL@ADDRESS>, 2024.\n+#\n+#, fuzzy\n+msgid \"\"\n+msgstr \"\"\n+\"Project-Id-Version: Xinference \\n\"\n+\"Report-Msgid-Bugs-To: \\n\"\n+\"POT-Creation-Date: 2024-01-10 11:33+0800\\n\"\n+\"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n+\"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n+\"Language: zh_CN\\n\"\n+\"Language-Team: zh_CN <[email protected]>\\n\"\n+\"Plural-Forms: nplurals=1; plural=0;\\n\"\n+\"MIME-Version: 1.0\\n\"\n+\"Content-Type: text/plain; charset=utf-8\\n\"\n+\"Content-Transfer-Encoding: 8bit\\n\"\n+\"Generated-By: Babel 2.12.1\\n\"\n+\n+#: ../../source/user_guide/auth_system.rst:5\n+msgid \"Simple OAuth2 System (experimental)\"\n+msgstr \"OAuth2 系统(实验性质)\"\n+\n+#: ../../source/user_guide/auth_system.rst:7\n+msgid \"\"\n+\"Xinference builds an In-memory OAuth2 authentication and authorization \"\n+\"system using the account-password mode.\"\n+msgstr \"\"\n+\"Xinference 使用了账号密码的模式构建了一个基于内存的 OAuth2 的身份验证和授权系统。\"\n+\n+#: ../../source/user_guide/auth_system.rst:10\n+msgid \"\"\n+\"If you don't have authentication and authorization requirements, you can \"\n+\"use Xinference as before, without any changes.\"\n+msgstr \"\"\n+\"如果没有身份验证和授权的要求,可以像之前一样使用 Xinference,无需任何改动。\"\n+\n+#: ../../source/user_guide/auth_system.rst:14\n+msgid \"Permissions\"\n+msgstr \"权限\"\n+\n+#: ../../source/user_guide/auth_system.rst:15\n+msgid \"\"\n+\"Currently, Xinference system internally defines some interface \"\n+\"permissions:\"\n+msgstr \"\"\n+\"目前,Xinference 内部定义了以下几个接口权限:\"\n+\n+#: ../../source/user_guide/auth_system.rst:17\n+msgid \"``models:list``: Permission to list models and get models' information.\"\n+msgstr \"``models:list``: 获取模型列表和信息的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:18\n+msgid \"``models:read``: Permission to use models.\"\n+msgstr \"``models:read``: 使用模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:19\n+msgid \"``models:register``: Permission to register custom models.\"\n+msgstr \"``models:register``: 注册模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:20\n+msgid \"``models:unregister``: Permission to unregister custom models.\"\n+msgstr \"``models:unregister``: 取消注册模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:21\n+msgid \"``models:start``: Permission to launch models.\"\n+msgstr \"``models:start``: 启动模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:22\n+msgid \"``models:stop``: Permission to stop running models.\"\n+msgstr \"``models:stop``: 停止模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:23\n+msgid \"``admin``: Administrators have permissions for all interfaces.\"\n+msgstr \"``admin``: 管理员拥有所有接口的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:27\n+msgid \"Startup\"\n+msgstr \"开始使用\"\n+\n+#: ../../source/user_guide/auth_system.rst:28\n+msgid \"\"\n+\"All authentication and authorization information needs to be specified \"\n+\"and loaded into memory when Xinference is started. 
Xinference requires a \"\n+\"JSON-formatted file with the following specific fields:\"\n+msgstr \"\"\n+\"在启动 Xinference 时,需要指定所有的验证和授权信息。当前,Xinference 需要一个\"\n+\" JSON 文件,其中包含以下特定字段:\"\n+\n+#: ../../source/user_guide/auth_system.rst:59\n+msgid \"\"\n+\"``auth_config``: This field is used to configure security-related \"\n+\"information.\"\n+msgstr \"\"\n+\"``auth_config``: 这个字段配置与安全相关的信息。\"\n+\n+#: ../../source/user_guide/auth_system.rst:61\n+msgid \"\"\n+\"``algorithm``: The algorithm used for token generation and parsing. \"\n+\"``HS256`` or ``RS256`` is recommended.\"\n+msgstr \"``algorithm``: 用于令牌生成与解析的算法。推荐使用 `HS256`` 或者 ``RS256`` 。\"", "line": null, "original_line": 103, "original_start_line": null, "path": "doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po", "start_line": null, "text": "@author:\n```suggestion\r\nmsgstr \"``algorithm``: 用于令牌生成与解析的算法。推荐使用 ``HS256`` 或者 ``RS256`` 。\"\r\n```" }, { "diff_hunk": "@@ -0,0 +1,225 @@\n+# SOME DESCRIPTIVE TITLE.\n+# Copyright (C) 2023, Xorbits Inc.\n+# This file is distributed under the same license as the Xinference package.\n+# FIRST AUTHOR <EMAIL@ADDRESS>, 2024.\n+#\n+#, fuzzy\n+msgid \"\"\n+msgstr \"\"\n+\"Project-Id-Version: Xinference \\n\"\n+\"Report-Msgid-Bugs-To: \\n\"\n+\"POT-Creation-Date: 2024-01-10 11:33+0800\\n\"\n+\"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n+\"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n+\"Language: zh_CN\\n\"\n+\"Language-Team: zh_CN <[email protected]>\\n\"\n+\"Plural-Forms: nplurals=1; plural=0;\\n\"\n+\"MIME-Version: 1.0\\n\"\n+\"Content-Type: text/plain; charset=utf-8\\n\"\n+\"Content-Transfer-Encoding: 8bit\\n\"\n+\"Generated-By: Babel 2.12.1\\n\"\n+\n+#: ../../source/user_guide/auth_system.rst:5\n+msgid \"Simple OAuth2 System (experimental)\"\n+msgstr \"OAuth2 系统(实验性质)\"\n+\n+#: ../../source/user_guide/auth_system.rst:7\n+msgid \"\"\n+\"Xinference builds an In-memory OAuth2 authentication and authorization \"\n+\"system using the account-password mode.\"\n+msgstr \"\"\n+\"Xinference 使用了账号密码的模式构建了一个基于内存的 OAuth2 的身份验证和授权系统。\"\n+\n+#: ../../source/user_guide/auth_system.rst:10\n+msgid \"\"\n+\"If you don't have authentication and authorization requirements, you can \"\n+\"use Xinference as before, without any changes.\"\n+msgstr \"\"\n+\"如果没有身份验证和授权的要求,可以像之前一样使用 Xinference,无需任何改动。\"\n+\n+#: ../../source/user_guide/auth_system.rst:14\n+msgid \"Permissions\"\n+msgstr \"权限\"\n+\n+#: ../../source/user_guide/auth_system.rst:15\n+msgid \"\"\n+\"Currently, Xinference system internally defines some interface \"\n+\"permissions:\"\n+msgstr \"\"\n+\"目前,Xinference 内部定义了以下几个接口权限:\"\n+\n+#: ../../source/user_guide/auth_system.rst:17\n+msgid \"``models:list``: Permission to list models and get models' information.\"\n+msgstr \"``models:list``: 获取模型列表和信息的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:18\n+msgid \"``models:read``: Permission to use models.\"\n+msgstr \"``models:read``: 使用模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:19\n+msgid \"``models:register``: Permission to register custom models.\"\n+msgstr \"``models:register``: 注册模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:20\n+msgid \"``models:unregister``: Permission to unregister custom models.\"\n+msgstr \"``models:unregister``: 取消注册模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:21\n+msgid \"``models:start``: Permission to launch models.\"\n+msgstr \"``models:start``: 启动模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:22\n+msgid \"``models:stop``: Permission to stop running 
models.\"\n+msgstr \"``models:stop``: 停止模型的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:23\n+msgid \"``admin``: Administrators have permissions for all interfaces.\"\n+msgstr \"``admin``: 管理员拥有所有接口的权限。\"\n+\n+#: ../../source/user_guide/auth_system.rst:27\n+msgid \"Startup\"\n+msgstr \"开始使用\"\n+\n+#: ../../source/user_guide/auth_system.rst:28\n+msgid \"\"\n+\"All authentication and authorization information needs to be specified \"\n+\"and loaded into memory when Xinference is started. Xinference requires a \"\n+\"JSON-formatted file with the following specific fields:\"\n+msgstr \"\"\n+\"在启动 Xinference 时,需要指定所有的验证和授权信息。当前,Xinference 需要一个\"\n+\" JSON 文件,其中包含以下特定字段:\"\n+\n+#: ../../source/user_guide/auth_system.rst:59\n+msgid \"\"\n+\"``auth_config``: This field is used to configure security-related \"\n+\"information.\"\n+msgstr \"\"\n+\"``auth_config``: 这个字段配置与安全相关的信息。\"\n+\n+#: ../../source/user_guide/auth_system.rst:61\n+msgid \"\"\n+\"``algorithm``: The algorithm used for token generation and parsing. \"\n+\"``HS256`` or ``RS256`` is recommended.\"\n+msgstr \"``algorithm``: 用于令牌生成与解析的算法。推荐使用 `HS256`` 或者 ``RS256`` 。\"\n+\n+#: ../../source/user_guide/auth_system.rst:63\n+msgid \"\"\n+\"``secret_key``: The secret_key used for token generation and parsing. Use\"\n+\" this command to generate: ``openssl rand -hex 32``.\"\n+msgstr \"``secret_key``: 用于令牌生成和解析的密钥。可以使用该命令生成: ``openssl rand -hex 32`` 。\"\n+\n+#: ../../source/user_guide/auth_system.rst:65\n+msgid \"\"\n+\"``token_expire_in_minutes``: Reserved field indicating the expiration \"\n+\"time of the token. The current open-source version of Xinference does not\"\n+\" check the expiration time of tokens.\"\n+msgstr \"\"\n+\"``token_expire_in_minutes``: 保留字段,表示令牌失效时间。目前 Xinference 开源版本\"\n+\"不会检查令牌过期时间。\"\n+\n+#: ../../source/user_guide/auth_system.rst:67\n+msgid \"\"\n+\"``user_config``: This field is used to configure user and permission \"\n+\"information. Each user information is composed of these fields:\"\n+msgstr \"\"\n+\"``user_config``: 这个字段用来配置用户和权限信息。每个用户信息由以下字段组成:\"\n+\n+#: ../../source/user_guide/auth_system.rst:69\n+msgid \"``username``: string field for username.\"\n+msgstr \"``username``: 字符串,表示用户名\"\n+\n+#: ../../source/user_guide/auth_system.rst:71\n+msgid \"``password``: string field for password.\"\n+msgstr \"``password``: 字符串,表示密码\"\n+\n+#: ../../source/user_guide/auth_system.rst:73\n+msgid \"\"\n+\"``permissions``: A list containing strings representing the permissions \"\n+\"that this user has. The permissions are described as above.\"\n+msgstr \"\"\n+\"``permissions``: 字符串列表,表示该用户拥有的权限。权限描述如上权限部分文档所述。\"\n+\n+#: ../../source/user_guide/auth_system.rst:76\n+msgid \"\"\n+\"Once you have configured such a JSON file, use the ``--auth-config`` \"\n+\"option to enable Xinference with the authentication and authorization \"\n+\"system. For example, for local startup:\"\n+msgstr \"\"\n+\"配置好这样一个 JSON 文件后,可以使用 ``--auth-config`` 选项启用具有身份验证和\"\n+\"授权系统的 Xinference。例如,本地启动的命令如下所示:\"\n+\n+#: ../../source/user_guide/auth_system.rst:83\n+msgid \"\"\n+\"For distributed startup, just specify this option when starting the \"\n+\"supervisor:\"\n+msgstr \"\"\n+\"在分布式环境下,只需要在启动 supervisor 的是指定这个选项:\"", "line": null, "original_line": 156, "original_start_line": null, "path": "doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po", "start_line": null, "text": "@author:\n```suggestion\r\n\"在分布式环境下,只需要在启动 supervisor 时指定这个选项:\"\r\n```" } ]
18d7007f3dcbdd8c370ef51806d31eaf0916ac62
diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml index b5f969f797..70fa563fed 100644 --- a/.github/workflows/python.yaml +++ b/.github/workflows/python.yaml @@ -128,6 +128,8 @@ jobs: ${{ env.SELF_HOST_PYTHON }} -m pip install -U modelscope ${{ env.SELF_HOST_PYTHON }} -m pip install -U sse_starlette ${{ env.SELF_HOST_PYTHON }} -m pip install -U xoscar + ${{ env.SELF_HOST_PYTHON }} -m pip install -U "python-jose[cryptography]" + ${{ env.SELF_HOST_PYTHON }} -m pip install -U "passlib[bcrypt]" ${{ env.SELF_HOST_PYTHON }} -m pytest --timeout=1500 \ -W ignore::PendingDeprecationWarning \ --cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/model/image/tests/test_stable_diffusion.py diff --git a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po new file mode 100644 index 0000000000..21a605aeae --- /dev/null +++ b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/auth_system.po @@ -0,0 +1,225 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) 2023, Xorbits Inc. +# This file is distributed under the same license as the Xinference package. +# FIRST AUTHOR <EMAIL@ADDRESS>, 2024. +# +#, fuzzy +msgid "" +msgstr "" +"Project-Id-Version: Xinference \n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-01-10 11:33+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" +"Language: zh_CN\n" +"Language-Team: zh_CN <[email protected]>\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Generated-By: Babel 2.12.1\n" + +#: ../../source/user_guide/auth_system.rst:5 +msgid "Simple OAuth2 System (experimental)" +msgstr "OAuth2 系统(实验性质)" + +#: ../../source/user_guide/auth_system.rst:7 +msgid "" +"Xinference builds an In-memory OAuth2 authentication and authorization " +"system using the account-password mode." +msgstr "" +"Xinference 使用了账号密码的模式构建了一个基于内存的 OAuth2 的身份验证和授权系统。" + +#: ../../source/user_guide/auth_system.rst:10 +msgid "" +"If you don't have authentication and authorization requirements, you can " +"use Xinference as before, without any changes." +msgstr "" +"如果没有身份验证和授权的要求,可以像之前一样使用 Xinference,无需任何改动。" + +#: ../../source/user_guide/auth_system.rst:14 +msgid "Permissions" +msgstr "权限" + +#: ../../source/user_guide/auth_system.rst:15 +msgid "" +"Currently, Xinference system internally defines some interface " +"permissions:" +msgstr "" +"目前,Xinference 内部定义了以下几个接口权限:" + +#: ../../source/user_guide/auth_system.rst:17 +msgid "``models:list``: Permission to list models and get models' information." +msgstr "``models:list``: 获取模型列表和信息的权限。" + +#: ../../source/user_guide/auth_system.rst:18 +msgid "``models:read``: Permission to use models." +msgstr "``models:read``: 使用模型的权限。" + +#: ../../source/user_guide/auth_system.rst:19 +msgid "``models:register``: Permission to register custom models." +msgstr "``models:register``: 注册模型的权限。" + +#: ../../source/user_guide/auth_system.rst:20 +msgid "``models:unregister``: Permission to unregister custom models." +msgstr "``models:unregister``: 取消注册模型的权限。" + +#: ../../source/user_guide/auth_system.rst:21 +msgid "``models:start``: Permission to launch models." +msgstr "``models:start``: 启动模型的权限。" + +#: ../../source/user_guide/auth_system.rst:22 +msgid "``models:stop``: Permission to stop running models." 
+msgstr "``models:stop``: 停止模型的权限。" + +#: ../../source/user_guide/auth_system.rst:23 +msgid "``admin``: Administrators have permissions for all interfaces." +msgstr "``admin``: 管理员拥有所有接口的权限。" + +#: ../../source/user_guide/auth_system.rst:27 +msgid "Startup" +msgstr "开始使用" + +#: ../../source/user_guide/auth_system.rst:28 +msgid "" +"All authentication and authorization information needs to be specified " +"and loaded into memory when Xinference is started. Xinference requires a " +"JSON-formatted file with the following specific fields:" +msgstr "" +"在启动 Xinference 时,需要指定所有的验证和授权信息。当前,Xinference 需要一个" +" JSON 文件,其中包含以下特定字段:" + +#: ../../source/user_guide/auth_system.rst:59 +msgid "" +"``auth_config``: This field is used to configure security-related " +"information." +msgstr "" +"``auth_config``: 这个字段配置与安全相关的信息。" + +#: ../../source/user_guide/auth_system.rst:61 +msgid "" +"``algorithm``: The algorithm used for token generation and parsing. " +"``HS256`` or ``RS256`` is recommended." +msgstr "``algorithm``: 用于令牌生成与解析的算法。推荐使用 ``HS256`` 或者 ``RS256`` 。" + +#: ../../source/user_guide/auth_system.rst:63 +msgid "" +"``secret_key``: The secret_key used for token generation and parsing. Use" +" this command to generate: ``openssl rand -hex 32``." +msgstr "``secret_key``: 用于令牌生成和解析的密钥。可以使用该命令生成: ``openssl rand -hex 32`` 。" + +#: ../../source/user_guide/auth_system.rst:65 +msgid "" +"``token_expire_in_minutes``: Reserved field indicating the expiration " +"time of the token. The current open-source version of Xinference does not" +" check the expiration time of tokens." +msgstr "" +"``token_expire_in_minutes``: 保留字段,表示令牌失效时间。目前 Xinference 开源版本" +"不会检查令牌过期时间。" + +#: ../../source/user_guide/auth_system.rst:67 +msgid "" +"``user_config``: This field is used to configure user and permission " +"information. Each user information is composed of these fields:" +msgstr "" +"``user_config``: 这个字段用来配置用户和权限信息。每个用户信息由以下字段组成:" + +#: ../../source/user_guide/auth_system.rst:69 +msgid "``username``: string field for username." +msgstr "``username``: 字符串,表示用户名" + +#: ../../source/user_guide/auth_system.rst:71 +msgid "``password``: string field for password." +msgstr "``password``: 字符串,表示密码" + +#: ../../source/user_guide/auth_system.rst:73 +msgid "" +"``permissions``: A list containing strings representing the permissions " +"that this user has. The permissions are described as above." +msgstr "" +"``permissions``: 字符串列表,表示该用户拥有的权限。权限描述如上权限部分文档所述。" + +#: ../../source/user_guide/auth_system.rst:76 +msgid "" +"Once you have configured such a JSON file, use the ``--auth-config`` " +"option to enable Xinference with the authentication and authorization " +"system. For example, for local startup:" +msgstr "" +"配置好这样一个 JSON 文件后,可以使用 ``--auth-config`` 选项启用具有身份验证和" +"授权系统的 Xinference。例如,本地启动的命令如下所示:" + +#: ../../source/user_guide/auth_system.rst:83 +msgid "" +"For distributed startup, just specify this option when starting the " +"supervisor:" +msgstr "" +"在分布式环境下,只需要在启动 supervisor 时指定这个选项:" + +#: ../../source/user_guide/auth_system.rst:91 +msgid "Usage" +msgstr "使用" + +#: ../../source/user_guide/auth_system.rst:92 +msgid "" +"For Xinference with the authentication and authorization system enabled, " +"all usage remains the same, except for the addition of a login step at " +"the beginning." 
+msgstr "" +"使用带有权限管理的 Xinference 服务与正常的版本保持一致,只是在开始阶段添加了登录步骤。" + +#: ../../source/user_guide/auth_system.rst:94 +msgid "Signin for command line users:" +msgstr "使用命令行登录:" + +#: ../../source/user_guide/auth_system.rst:101 +msgid "For python SDK users:" +msgstr "使用 Python SDK 登录:" + +#: ../../source/user_guide/auth_system.rst:110 +msgid "" +"For web UI users, when opening the web UI, you will first be directed to " +"the login page. After logging in, you can use the web UI normally." +msgstr "" +"对于 Web UI 的用户,在打开 Web UI 时,将首先跳转到登录页面。登录后,就可以正常使用" +"Web UI 的功能。" + +#: ../../source/user_guide/auth_system.rst:114 +msgid "Http Status Code" +msgstr "Http 状态码" + +#: ../../source/user_guide/auth_system.rst:115 +msgid "Add the following two HTTP status codes:" +msgstr "添加了以下两种 HTTP 状态码:" + +#: ../../source/user_guide/auth_system.rst:117 +msgid "``401 Unauthorized``: login information or token verifies failed." +msgstr "``401 Unauthorized``: 登录信息或者令牌验证失效。" + +#: ../../source/user_guide/auth_system.rst:118 +msgid "``403 Forbidden``: No enough permissions when accessing interfaces." +msgstr "``403 Forbidden``: 没有足够的权限访问接口。" + +#: ../../source/user_guide/auth_system.rst:120 +msgid "" +"For the command line, SDK, or web UI users, there will be clear " +"information prompts when encountering authorization and permissions " +"issues." +msgstr "对于命令行、SDK 或 Web UI 用户,在遇到授权和权限问题时,会有明确的信息提示。" + +#: ../../source/user_guide/auth_system.rst:124 +msgid "Note" +msgstr "注意" + +#: ../../source/user_guide/auth_system.rst:125 +msgid "" +"This feature is still in an experimental stage. Feel free to provide " +"feedback on usage issues or improvement suggestions through `GitHub " +"issues <https://github.com/xorbitsai/inference/issues>`_ or `our Slack " +"<https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-" +"RbfhbPVpx7prOVdM1CAuxg>`_." +msgstr "" +"该功能处于实验阶段。欢迎通过 `GitHub " +"issues <https://github.com/xorbitsai/inference/issues>`_ 或者" +" `Slack <https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-" +"RbfhbPVpx7prOVdM1CAuxg>`_ 提供反馈和建议。" + diff --git a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po index c4080f0eab..134def615b 100644 --- a/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po +++ b/doc/source/locale/zh_CN/LC_MESSAGES/user_guide/client_api.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Xinference \n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2023-12-25 17:11+0800\n" +"POT-Creation-Date: 2024-01-10 11:33+0800\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Language: zh_CN\n" @@ -17,7 +17,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.11.0\n" +"Generated-By: Babel 2.12.1\n" #: ../../source/user_guide/client_api.rst:5 msgid "Client API" @@ -39,8 +39,8 @@ msgid "" "can connect to the xinference server through this endpoint using the " "Client." 
msgstr "" -"在命令日志里会打印服务地址,上述日志中为 `http://127.0.0.1:9997`。用户可以通过 " -"Client 连接 Xinference 服务。" +"在命令日志里会打印服务地址,上述日志中为 `http://127.0.0.1:9997`。用户可以通过 Client 连接 Xinference " +"服务。" #: ../../source/user_guide/client_api.rst:20 msgid "" @@ -60,41 +60,89 @@ msgstr "列出所有内置支持的 LLM 模型:" msgid "To initialize an LLM and chat:" msgstr "初始化一个大语言模型并且与之对话:" -#: ../../source/user_guide/client_api.rst:63 +#: ../../source/user_guide/client_api.rst:41 +#: ../../source/user_guide/client_api.rst:162 +#: ../../source/user_guide/client_api.rst:233 +msgid "Xinference Client" +msgstr "Xinference Client" + +#: ../../source/user_guide/client_api.rst:66 +#: ../../source/user_guide/client_api.rst:194 +#: ../../source/user_guide/client_api.rst:257 +msgid "OpenAI Client" +msgstr "OpenAI Client" + +#: ../../source/user_guide/client_api.rst:68 +msgid "" +"Openai client request with the same function as before, excluding launch " +"model. More details refer to: https://platform.openai.com/docs/api-" +"reference/chat?lang=python" +msgstr "" +"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。" +"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/chat?lang=python" + +#: ../../source/user_guide/client_api.rst:90 +msgid "OpenAI Client Tool Calls" +msgstr "OpenAI 工具调用" + +#: ../../source/user_guide/client_api.rst:135 +#: ../../source/user_guide/client_api.rst:176 +#: ../../source/user_guide/client_api.rst:208 +#: ../../source/user_guide/client_api.rst:248 +#: ../../source/user_guide/client_api.rst:272 +#: ../../source/user_guide/client_api.rst:300 +msgid "Output:" +msgstr "输出:" + +#: ../../source/user_guide/client_api.rst:144 msgid "Embedding" msgstr "Embedding" -#: ../../source/user_guide/client_api.rst:65 +#: ../../source/user_guide/client_api.rst:146 msgid "To list the available built-in embedding models:" msgstr "列出所有内置支持的 embedding 模型:" -#: ../../source/user_guide/client_api.rst:78 +#: ../../source/user_guide/client_api.rst:159 msgid "To launch an embedding model and embed text:" msgstr "拉起 embedding 模型并使用文本向量化:" -#: ../../source/user_guide/client_api.rst:92 -#: ../../source/user_guide/client_api.rst:138 -#: ../../source/user_guide/client_api.rst:168 -msgid "Output:" -msgstr "输出:" +#: ../../source/user_guide/client_api.rst:196 +msgid "" +"Openai client request with the same function as before, excluding launch " +"model. More details refer to: https://platform.openai.com/docs/api-" +"reference/embeddings?lang=python" +msgstr "" +"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。" +"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/embeddings?lang=python" + -#: ../../source/user_guide/client_api.rst:110 +#: ../../source/user_guide/client_api.rst:215 msgid "Image" msgstr "图片" -#: ../../source/user_guide/client_api.rst:112 +#: ../../source/user_guide/client_api.rst:217 msgid "To list the available built-in image models:" msgstr "列出所有内置的文生图模型:" -#: ../../source/user_guide/client_api.rst:123 +#: ../../source/user_guide/client_api.rst:230 msgid "To initiate an image model and generate an image using a text prompt:" msgstr "初始化一个文生图模型并通过提示词生成图片:" -#: ../../source/user_guide/client_api.rst:147 +#: ../../source/user_guide/client_api.rst:259 +msgid "" +"Openai client request with the same function as before, excluding launch " +"model. 
More details refer to: https://platform.openai.com/docs/api-" +"reference/images/create?lang=python" +msgstr "" +"使用 Openai 发送请求时,除了创建模型,其余的请求都保持与 Openai 的接口兼容。" +"Openai 使用方式可以参考 https://platform.openai.com/docs/api-reference/images/create?lang=python" + + +#: ../../source/user_guide/client_api.rst:279 msgid "Rerank" msgstr "Rerank" -#: ../../source/user_guide/client_api.rst:148 +#: ../../source/user_guide/client_api.rst:280 msgid "To launch a rerank model and compute the similarity scores:" msgstr "拉起 rerank 模型并计算文本相似度:" diff --git a/doc/source/user_guide/auth_system.rst b/doc/source/user_guide/auth_system.rst new file mode 100644 index 0000000000..eeeed7181c --- /dev/null +++ b/doc/source/user_guide/auth_system.rst @@ -0,0 +1,127 @@ +.. _user_guide_auth_system: + +=================================== +Simple OAuth2 System (experimental) +=================================== + +Xinference builds an In-memory OAuth2 authentication and authorization system using the account-password mode. + +.. note:: + If you don't have authentication and authorization requirements, you can use Xinference as before, without any changes. + + +Permissions +=========== +Currently, Xinference system internally defines some interface permissions: + +* ``models:list``: Permission to list models and get models' information. +* ``models:read``: Permission to use models. +* ``models:register``: Permission to register custom models. +* ``models:unregister``: Permission to unregister custom models. +* ``models:start``: Permission to launch models. +* ``models:stop``: Permission to stop running models. +* ``admin``: Administrators have permissions for all interfaces. + + +Startup +======= +All authentication and authorization information needs to be specified and loaded into memory when Xinference is started. +Xinference requires a JSON-formatted file with the following specific fields: + +.. code-block:: json + + { + "auth_config": { + "algorithm": "HS256", + "secret_key": "09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7", + "token_expire_in_minutes": 30 + }, + "user_config": [ + { + "username": "user1", + "password": "secret1", + "permissions": [ + "admin" + ] + }, + { + "username": "user2", + "password": "secret2", + "permissions": [ + "models:list", + "models:read" + ] + } + ] + } + + +* ``auth_config``: This field is used to configure security-related information. + + * ``algorithm``: The algorithm used for token generation and parsing. ``HS256`` or ``RS256`` is recommended. + + * ``secret_key``: The secret_key used for token generation and parsing. Use this command to generate: ``openssl rand -hex 32``. + + * ``token_expire_in_minutes``: Reserved field indicating the expiration time of the token. The current open-source version of Xinference does not check the expiration time of tokens. + +* ``user_config``: This field is used to configure user and permission information. Each user information is composed of these fields: + + * ``username``: string field for username. + + * ``password``: string field for password. + + * ``permissions``: A list containing strings representing the permissions that this user has. The permissions are described as above. + + +Once you have configured such a JSON file, use the ``--auth-config`` option to enable Xinference with the authentication and authorization system. For example, for local startup: + +.. 
code-block:: bash + + xinference-local -H 0.0.0.0 --auth-config /path/to/your_json_config_file + + +For distributed startup, just specify this option when starting the supervisor: + +.. code-block:: bash + + xinference-supervisor -H <supervisor_ip> --auth-config /path/to/your_json_config_file + + +Usage +===== +For Xinference with the authentication and authorization system enabled, all usage remains the same, except for the addition of a login step at the beginning. + +Signin for command line users: + +.. code-block:: bash + + xinference login -e <endpoint> --username <username> --password <password> + + +For python SDK users: + +.. code-block:: python + + from xinference.client import Client + client = Client('<endpoint>') + client.login('<name>', '<pass>') + + +For web UI users, when opening the web UI, you will first be directed to the login page. After logging in, you can use the web UI normally. + + +Http Status Code +================ +Add the following two HTTP status codes: + +* ``401 Unauthorized``: login information or token verifies failed. +* ``403 Forbidden``: No enough permissions when accessing interfaces. + +For the command line, SDK, or web UI users, there will be clear information prompts when encountering authorization and permissions issues. + + +Note +==== +This feature is still in an experimental stage. +Feel free to provide feedback on usage issues or improvement suggestions through `GitHub issues <https://github.com/xorbitsai/inference/issues>`_ or +`our Slack <https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-RbfhbPVpx7prOVdM1CAuxg>`_. diff --git a/doc/source/user_guide/index.rst b/doc/source/user_guide/index.rst index bca65cec4f..5cdf96e95e 100644 --- a/doc/source/user_guide/index.rst +++ b/doc/source/user_guide/index.rst @@ -11,3 +11,4 @@ User Guide backends client_api spec_decoding + auth_system diff --git a/setup.cfg b/setup.cfg index 893f9c334b..aa037fa46e 100644 --- a/setup.cfg +++ b/setup.cfg @@ -41,6 +41,8 @@ install_requires = modelscope>=1.10.0 sse_starlette>=1.6.5 # ensure_bytes API break change: https://github.com/sysid/sse-starlette/issues/65 openai>1 # For typing + python-jose[cryptography] + passlib[bcrypt] [options.packages.find] exclude = diff --git a/xinference/api/oauth2/__init__.py b/xinference/api/oauth2/__init__.py new file mode 100644 index 0000000000..37f6558d95 --- /dev/null +++ b/xinference/api/oauth2/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/xinference/api/oauth2/common.py b/xinference/api/oauth2/common.py new file mode 100644 index 0000000000..3d74b66482 --- /dev/null +++ b/xinference/api/oauth2/common.py @@ -0,0 +1,14 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +XINFERENCE_OAUTH2_CONFIG = None diff --git a/xinference/api/oauth2/core.py b/xinference/api/oauth2/core.py new file mode 100644 index 0000000000..e1a6724de0 --- /dev/null +++ b/xinference/api/oauth2/core.py @@ -0,0 +1,93 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import logging +from typing import List, Optional, Union + +from fastapi import Depends, HTTPException, status +from fastapi.security import OAuth2PasswordBearer, SecurityScopes +from jose import JWTError, jwt +from pydantic import BaseModel, ValidationError +from typing_extensions import Annotated + +from .types import AuthStartupConfig, User + +logger = logging.getLogger(__name__) + + +oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token") + + +def get_db(): + from .common import XINFERENCE_OAUTH2_CONFIG + + # In a real enterprise-level environment, this should be the database + yield XINFERENCE_OAUTH2_CONFIG + + +def get_user(db_users: List[User], username: str) -> Optional[User]: + for user in db_users: + if user.username == username: + return user + return None + + +class TokenData(BaseModel): + username: Union[str, None] = None + scopes: List[str] = [] + + +def verify_token( + security_scopes: SecurityScopes, + token: Annotated[str, Depends(oauth2_scheme)], + config: Optional[AuthStartupConfig] = Depends(get_db), +): + if security_scopes.scopes: + authenticate_value = f'Bearer scope="{security_scopes.scope_str}"' + else: + authenticate_value = "Bearer" + credentials_exception = HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Could not validate credentials", + headers={"WWW-Authenticate": authenticate_value}, + ) + + try: + assert config is not None + payload = jwt.decode( + token, + config.auth_config.secret_key, + algorithms=[config.auth_config.algorithm], + options={"verify_exp": False}, # TODO: supports token expiration + ) + username: str = payload.get("sub") + if username is None: + raise credentials_exception + token_scopes = payload.get("scopes", []) + # TODO: check expire + token_data = TokenData(scopes=token_scopes, username=username) + except (JWTError, ValidationError): + raise credentials_exception + user = get_user(config.user_config, username=token_data.username) # type: ignore + if user is None: + raise credentials_exception + if "admin" in token_data.scopes: + return user + for scope in security_scopes.scopes: + if scope not in token_data.scopes: + raise HTTPException( + status_code=status.HTTP_403_FORBIDDEN, + detail="Not enough permissions", + headers={"WWW-Authenticate": authenticate_value}, + ) + return user diff --git 
a/xinference/api/oauth2/types.py b/xinference/api/oauth2/types.py new file mode 100644 index 0000000000..b0a86a5314 --- /dev/null +++ b/xinference/api/oauth2/types.py @@ -0,0 +1,36 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from typing import List + +from pydantic import BaseModel + + +class LoginUserForm(BaseModel): + username: str + password: str + + +class User(LoginUserForm): + permissions: List[str] + + +class AuthConfig(BaseModel): + algorithm: str = "HS256" + secret_key: str + token_expire_in_minutes: int + + +class AuthStartupConfig(BaseModel): + auth_config: AuthConfig + user_config: List[User] diff --git a/xinference/api/oauth2/utils.py b/xinference/api/oauth2/utils.py new file mode 100644 index 0000000000..9980b7722a --- /dev/null +++ b/xinference/api/oauth2/utils.py @@ -0,0 +1,44 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from datetime import datetime, timedelta +from typing import Union + +from jose import jwt +from passlib.context import CryptContext + +pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") + + +def create_access_token( + data: dict, + secret_key: str, + algorithm: str, + expires_delta: Union[timedelta, None] = None, +): + to_encode = data.copy() + if expires_delta: + expire = datetime.utcnow() + expires_delta + else: + expire = datetime.utcnow() + timedelta(minutes=15) + to_encode.update({"exp": expire}) + encoded_jwt = jwt.encode(to_encode, secret_key, algorithm=algorithm) + return encoded_jwt + + +def verify_password(plain_password, hashed_password): + return pwd_context.verify(plain_password, hashed_password) + + +def get_password_hash(password): + return pwd_context.hash(password) diff --git a/xinference/api/restful_api.py b/xinference/api/restful_api.py index dd628a08d9..3bdee59210 100644 --- a/xinference/api/restful_api.py +++ b/xinference/api/restful_api.py @@ -21,9 +21,11 @@ import pprint import sys import warnings +from datetime import timedelta from typing import Any, List, Optional, Union import gradio as gr +import pydantic import xoscar as xo from fastapi import ( APIRouter, @@ -34,9 +36,12 @@ Query, Request, Response, + Security, UploadFile, + status, ) from fastapi.middleware.cors import CORSMiddleware +from fastapi.responses import JSONResponse from fastapi.staticfiles import StaticFiles from PIL import Image from pydantic import BaseModel, Field @@ -57,11 +62,14 @@ CreateCompletion, ImageList, ) +from .oauth2.core import get_user, verify_token +from .oauth2.types import AuthStartupConfig, LoginUserForm, User +from .oauth2.utils import create_access_token, get_password_hash, verify_password logger = logging.getLogger(__name__) -class JSONResponse(StarletteJSONResponse): +class JSONResponse(StarletteJSONResponse): # type: ignore # noqa: F811 def render(self, content: Any) -> bytes: return json_dumps(content) @@ -125,16 +133,48 @@ class BuildGradioInterfaceRequest(BaseModel): model_lang: List[str] +def authenticate_user(db_users: List[User], username: str, password: str): + user = get_user(db_users, username) + if not user: + return False + if not verify_password(password, user.password): + return False + return user + + class RESTfulAPI: - def __init__(self, supervisor_address: str, host: str, port: int): + def __init__( + self, + supervisor_address: str, + host: str, + port: int, + auth_config_file: Optional[str] = None, + ): super().__init__() self._supervisor_address = supervisor_address self._host = host self._port = port self._supervisor_ref = None + self._auth_config: AuthStartupConfig = self.init_auth_config(auth_config_file) self._router = APIRouter() self._app = FastAPI() + @staticmethod + def init_auth_config(auth_config_file: Optional[str]): + from .oauth2 import common + + if auth_config_file: + config: AuthStartupConfig = pydantic.parse_file_as( + path=auth_config_file, type_=AuthStartupConfig + ) + for user in config.user_config: + user.password = get_password_hash(user.password) + common.XINFERENCE_OAUTH2_CONFIG = config # type: ignore + return config + + def is_authenticated(self): + return False if self._auth_config is None else True + @staticmethod def handle_request_limit_error(e: Exception): if "Rate limit reached" in str(e): @@ -147,6 +187,33 @@ async def _get_supervisor_ref(self) -> xo.ActorRefType[SupervisorActor]: ) return self._supervisor_ref + async def login_for_access_token(self, form_data: LoginUserForm) -> JSONResponse: + user = 
authenticate_user( + self._auth_config.user_config, form_data.username, form_data.password + ) + if not user: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Incorrect username or password", + headers={"WWW-Authenticate": "Bearer"}, + ) + assert user is not None and isinstance(user, User) + access_token_expires = timedelta( + minutes=self._auth_config.auth_config.token_expire_in_minutes + ) + access_token = create_access_token( + data={"sub": user.username, "scopes": user.permissions}, + secret_key=self._auth_config.auth_config.secret_key, + algorithm=self._auth_config.auth_config.algorithm, + expires_delta=access_token_expires, + ) + return JSONResponse( + content={"access_token": access_token, "token_type": "bearer"} + ) + + async def is_cluster_authenticated(self) -> JSONResponse: + return JSONResponse(content={"auth": self.is_authenticated()}) + def serve(self, logging_conf: Optional[dict] = None): self._app.add_middleware( CORSMiddleware, @@ -155,8 +222,10 @@ def serve(self, logging_conf: Optional[dict] = None): allow_methods=["*"], allow_headers=["*"], ) + + # internal interface self._router.add_api_route("/status", self.get_status, methods=["GET"]) - self._router.add_api_route("/v1/models", self.list_models, methods=["GET"]) + # conflict with /v1/models/{model_uid} below, so register this first self._router.add_api_route( "/v1/models/prompts", self._get_builtin_prompts, methods=["GET"] ) @@ -166,52 +235,115 @@ def serve(self, logging_conf: Optional[dict] = None): self._router.add_api_route( "/v1/cluster/devices", self._get_devices_count, methods=["GET"] ) + self._router.add_api_route("/v1/address", self.get_address, methods=["GET"]) + + # user interface + self._router.add_api_route( + "/v1/ui/{model_uid}", + self.build_gradio_interface, + methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, + ) + self._router.add_api_route( + "/token", self.login_for_access_token, methods=["POST"] + ) + self._router.add_api_route( + "/v1/cluster/auth", self.is_cluster_authenticated, methods=["GET"] + ) + self._router.add_api_route( + "/v1/models", + self.list_models, + methods=["GET"], + dependencies=[Security(verify_token, scopes=["models:list"])] + if self.is_authenticated() + else None, + ) + + self._router.add_api_route( + "/v1/models/{model_uid}", + self.describe_model, + methods=["GET"], + dependencies=[Security(verify_token, scopes=["models:list"])] + if self.is_authenticated() + else None, + ) self._router.add_api_route( - "/v1/models/{model_uid}", self.describe_model, methods=["GET"] + "/v1/models", + self.launch_model, + methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:start"])] + if self.is_authenticated() + else None, ) - self._router.add_api_route("/v1/models", self.launch_model, methods=["POST"]) self._router.add_api_route( "/experimental/speculative_llms", self.launch_speculative_llm, methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:start"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( - "/v1/models/{model_uid}", self.terminate_model, methods=["DELETE"] + "/v1/models/{model_uid}", + self.terminate_model, + methods=["DELETE"], + dependencies=[Security(verify_token, scopes=["models:stop"])] + if self.is_authenticated() + else None, ) - self._router.add_api_route("/v1/address", self.get_address, methods=["GET"]) self._router.add_api_route( "/v1/completions", self.create_completion, methods=["POST"], 
response_model=Completion, + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/embeddings", self.create_embedding, methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/rerank", self.rerank, methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/images/generations", self.create_images, methods=["POST"], response_model=ImageList, + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/images/variations", self.create_variations, methods=["POST"], response_model=ImageList, + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/chat/completions", self.create_chat_completion, methods=["POST"], response_model=ChatCompletion, + dependencies=[Security(verify_token, scopes=["models:read"])] + if self.is_authenticated() + else None, ) # for custom models @@ -219,25 +351,33 @@ def serve(self, logging_conf: Optional[dict] = None): "/v1/model_registrations/{model_type}", self.register_model, methods=["POST"], + dependencies=[Security(verify_token, scopes=["models:register"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/model_registrations/{model_type}/{model_name}", self.unregister_model, methods=["DELETE"], + dependencies=[Security(verify_token, scopes=["models:unregister"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/model_registrations/{model_type}", self.list_model_registrations, methods=["GET"], + dependencies=[Security(verify_token, scopes=["models:list"])] + if self.is_authenticated() + else None, ) self._router.add_api_route( "/v1/model_registrations/{model_type}/{model_name}", self.get_model_registrations, methods=["GET"], - ) - - self._router.add_api_route( - "/v1/ui/{model_uid}", self.build_gradio_interface, methods=["POST"] + dependencies=[Security(verify_token, scopes=["models:list"])] + if self.is_authenticated() + else None, ) self._app.include_router(self._router) @@ -467,7 +607,7 @@ async def launch_model(self, request: Request) -> JSONResponse: return JSONResponse(content={"model_uid": model_uid}) async def build_gradio_interface( - self, model_uid: str, body: BuildGradioInterfaceRequest + self, model_uid: str, body: BuildGradioInterfaceRequest, request: Request ) -> JSONResponse: """ Separate build_interface with launch_model @@ -492,6 +632,7 @@ async def build_gradio_interface( from ..core.chat_interface import LLMInterface try: + access_token = request.headers.get("Authorization") internal_host = "localhost" if self._host == "0.0.0.0" else self._host interface = LLMInterface( endpoint=f"http://{internal_host}:{self._port}", @@ -504,6 +645,7 @@ async def build_gradio_interface( model_ability=body.model_ability, model_description=body.model_description, model_lang=body.model_lang, + access_token=access_token, ).build() gr.mount_gradio_app(self._app, interface, f"/{model_uid}") except ValueError as ve: @@ -921,11 +1063,20 @@ async def get_model_registrations( def run( - supervisor_address: str, host: str, port: int, logging_conf: Optional[dict] = None + supervisor_address: str, + host: str, + port: int, + logging_conf: Optional[dict] = None, + 
auth_config_file: Optional[str] = None, ): logger.info(f"Starting Xinference at endpoint: http://{host}:{port}") try: - api = RESTfulAPI(supervisor_address=supervisor_address, host=host, port=port) + api = RESTfulAPI( + supervisor_address=supervisor_address, + host=host, + port=port, + auth_config_file=auth_config_file, + ) api.serve(logging_conf=logging_conf) except SystemExit: logger.warning("Failed to create socket with port %d", port) @@ -936,7 +1087,10 @@ def run( logger.info(f"Found available port: {port}") logger.info(f"Starting Xinference at endpoint: http://{host}:{port}") api = RESTfulAPI( - supervisor_address=supervisor_address, host=host, port=port + supervisor_address=supervisor_address, + host=host, + port=port, + auth_config_file=auth_config_file, ) api.serve(logging_conf=logging_conf) else: @@ -944,10 +1098,15 @@ def run( def run_in_subprocess( - supervisor_address: str, host: str, port: int, logging_conf: Optional[dict] = None + supervisor_address: str, + host: str, + port: int, + logging_conf: Optional[dict] = None, + auth_config_file: Optional[str] = None, ) -> multiprocessing.Process: p = multiprocessing.Process( - target=run, args=(supervisor_address, host, port, logging_conf) + target=run, + args=(supervisor_address, host, port, logging_conf, auth_config_file), ) p.daemon = True p.start() diff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py index 6a8c918c50..c081ede84c 100644 --- a/xinference/client/restful/restful_client.py +++ b/xinference/client/restful/restful_client.py @@ -53,9 +53,10 @@ class RESTfulModelHandle: programmatically. """ - def __init__(self, model_uid: str, base_url: str): + def __init__(self, model_uid: str, base_url: str, auth_headers: Dict): self._model_uid = model_uid self._base_url = base_url + self.auth_headers = auth_headers class RESTfulEmbeddingModelHandle(RESTfulModelHandle): @@ -82,7 +83,7 @@ def create_embedding(self, input: Union[str, List[str]]) -> "Embedding": """ url = f"{self._base_url}/v1/embeddings" request_body = {"model": self._model_uid, "input": input} - response = requests.post(url, json=request_body) + response = requests.post(url, json=request_body, headers=self.auth_headers) if response.status_code != 200: raise RuntimeError( f"Failed to create the embeddings, detail: {_get_error_string(response)}" @@ -135,7 +136,7 @@ def rerank( "max_chunks_per_doc": max_chunks_per_doc, "return_documents": return_documents, } - response = requests.post(url, json=request_body) + response = requests.post(url, json=request_body, headers=self.auth_headers) if response.status_code != 200: raise RuntimeError( f"Failed to rerank documents, detail: {response.json()['detail']}" @@ -182,7 +183,7 @@ def text_to_image( "response_format": response_format, "kwargs": json.dumps(kwargs), } - response = requests.post(url, json=request_body) + response = requests.post(url, json=request_body, headers=self.auth_headers) if response.status_code != 200: raise RuntimeError( f"Failed to create the images, detail: {_get_error_string(response)}" @@ -246,10 +247,7 @@ def image_to_image( for key, value in params.items(): files.append((key, (None, value))) files.append(("image", ("image", image, "application/octet-stream"))) - response = requests.post( - url, - files=files, - ) + response = requests.post(url, files=files, headers=self.auth_headers) if response.status_code != 200: raise RuntimeError( f"Failed to variants the images, detail: {_get_error_string(response)}" @@ -302,7 +300,9 @@ def generate( stream = 
bool(generate_config and generate_config.get("stream")) - response = requests.post(url, json=request_body, stream=stream) + response = requests.post( + url, json=request_body, stream=stream, headers=self.auth_headers + ) if response.status_code != 200: raise RuntimeError( f"Failed to generate completion, detail: {_get_error_string(response)}" @@ -384,7 +384,9 @@ def chat( request_body[key] = value stream = bool(generate_config and generate_config.get("stream")) - response = requests.post(url, json=request_body, stream=stream) + response = requests.post( + url, json=request_body, stream=stream, headers=self.auth_headers + ) if response.status_code != 200: raise RuntimeError( @@ -468,7 +470,9 @@ def chat( request_body[key] = value stream = bool(generate_config and generate_config.get("stream")) - response = requests.post(url, json=request_body, stream=stream) + response = requests.post( + url, json=request_body, stream=stream, headers=self.auth_headers + ) if response.status_code != 200: raise RuntimeError( @@ -536,7 +540,9 @@ def chat( request_body[key] = value stream = bool(generate_config and generate_config.get("stream")) - response = requests.post(url, json=request_body, stream=stream) + response = requests.post( + url, json=request_body, stream=stream, headers=self.auth_headers + ) if response.status_code != 200: raise RuntimeError( @@ -589,7 +595,9 @@ def generate( stream = bool(generate_config and generate_config.get("stream")) - response = requests.post(url, json=request_body, stream=stream) + response = requests.post( + url, json=request_body, stream=stream, headers=self.auth_headers + ) if response.status_code != 200: raise RuntimeError( f"Failed to generate completion, detail: {response.json()['detail']}" @@ -605,6 +613,47 @@ def generate( class Client: def __init__(self, base_url): self.base_url = base_url + self._headers = {} + self._cluster_authed = False + self._check_cluster_authenticated() + + def _set_token(self, token: Optional[str]): + if not self._cluster_authed or token is None: + return + self._headers["Authorization"] = f"Bearer {token}" + + def _get_token(self) -> Optional[str]: + return ( + str(self._headers["Authorization"]).replace("Bearer ", "") + if "Authorization" in self._headers + else None + ) + + def _check_cluster_authenticated(self): + url = f"{self.base_url}/v1/cluster/auth" + response = requests.get(url) + if response.status_code != 200: + raise RuntimeError( + f"Failed to get cluster information, detail: {response.json()['detail']}" + ) + response_data = response.json() + self._cluster_authed = bool(response_data["auth"]) + + def login(self, username: str, password: str): + if not self._cluster_authed: + return + url = f"{self.base_url}/token" + + payload = {"username": username, "password": password} + + response = requests.post(url, json=payload) + if response.status_code != 200: + raise RuntimeError(f"Failed to login, detail: {response.json()['detail']}") + + response_data = response.json() + # Only bearer token for now + access_token = response_data["access_token"] + self._headers["Authorization"] = f"Bearer {access_token}" def list_models(self) -> Dict[str, Dict[str, Any]]: """ @@ -619,7 +668,7 @@ def list_models(self) -> Dict[str, Dict[str, Any]]: url = f"{self.base_url}/v1/models" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to list model, detail: {_get_error_string(response)}" @@ -664,7 +713,7 @@ def launch_speculative_llm( } url = 
f"{self.base_url}/experimental/speculative_llms" - response = requests.post(url, json=payload) + response = requests.post(url, json=payload, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to launch model, detail: {_get_error_string(response)}" @@ -739,7 +788,7 @@ def launch_model( for key, value in kwargs.items(): payload[str(key)] = value - response = requests.post(url, json=payload) + response = requests.post(url, json=payload, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to launch model, detail: {_get_error_string(response)}" @@ -766,7 +815,7 @@ def terminate_model(self, model_uid: str): url = f"{self.base_url}/v1/models/{model_uid}" - response = requests.delete(url) + response = requests.delete(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to terminate model, detail: {_get_error_string(response)}" @@ -774,7 +823,7 @@ def terminate_model(self, model_uid: str): def _get_supervisor_internal_address(self): url = f"{self.base_url}/v1/address" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise RuntimeError(f"Failed to get supervisor internal address") response_data = response.json() @@ -806,7 +855,7 @@ def get_model(self, model_uid: str) -> RESTfulModelHandle: """ url = f"{self.base_url}/v1/models/{model_uid}" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to get the model description, detail: {_get_error_string(response)}" @@ -815,21 +864,35 @@ def get_model(self, model_uid: str) -> RESTfulModelHandle: if desc["model_type"] == "LLM": if desc["model_format"] == "ggmlv3" and "chatglm" in desc["model_name"]: - return RESTfulChatglmCppGenerateModelHandle(model_uid, self.base_url) + return RESTfulChatglmCppGenerateModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) elif "chat" in desc["model_ability"]: - return RESTfulChatModelHandle(model_uid, self.base_url) + return RESTfulChatModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) elif "generate" in desc["model_ability"]: - return RESTfulGenerateModelHandle(model_uid, self.base_url) + return RESTfulGenerateModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) else: raise ValueError(f"Unrecognized model ability: {desc['model_ability']}") elif desc["model_type"] == "embedding": - return RESTfulEmbeddingModelHandle(model_uid, self.base_url) + return RESTfulEmbeddingModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) elif desc["model_type"] == "image": - return RESTfulImageModelHandle(model_uid, self.base_url) + return RESTfulImageModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) elif desc["model_type"] == "rerank": - return RESTfulRerankModelHandle(model_uid, self.base_url) + return RESTfulRerankModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) elif desc["model_type"] == "multimodal": - return RESTfulMultimodalModelHandle(model_uid, self.base_url) + return RESTfulMultimodalModelHandle( + model_uid, self.base_url, auth_headers=self._headers + ) else: raise ValueError(f"Unknown model type:{desc['model_type']}") @@ -876,7 +939,7 @@ def describe_model(self, model_uid: str): """ url = f"{self.base_url}/v1/models/{model_uid}" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise 
RuntimeError( f"Failed to get the model description, detail: {_get_error_string(response)}" @@ -903,7 +966,7 @@ def register_model(self, model_type: str, model: str, persist: bool): """ url = f"{self.base_url}/v1/model_registrations/{model_type}" request_body = {"model": model, "persist": persist} - response = requests.post(url, json=request_body) + response = requests.post(url, json=request_body, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to register model, detail: {_get_error_string(response)}" @@ -929,7 +992,7 @@ def unregister_model(self, model_type: str, model_name: str): Report failure to unregister the custom model. Provide details of failure through error message. """ url = f"{self.base_url}/v1/model_registrations/{model_type}/{model_name}" - response = requests.delete(url) + response = requests.delete(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to register model, detail: {_get_error_string(response)}" @@ -959,7 +1022,7 @@ def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]: """ url = f"{self.base_url}/v1/model_registrations/{model_type}" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to list model registration, detail: {_get_error_string(response)}" @@ -987,7 +1050,7 @@ def get_model_registration( The collection of registered models on the server. """ url = f"{self.base_url}/v1/model_registrations/{model_type}/{model_name}" - response = requests.get(url) + response = requests.get(url, headers=self._headers) if response.status_code != 200: raise RuntimeError( f"Failed to list model registration, detail: {_get_error_string(response)}" diff --git a/xinference/client/tests/test_client_with_auth.py b/xinference/client/tests/test_client_with_auth.py new file mode 100644 index 0000000000..5be0df8fbf --- /dev/null +++ b/xinference/client/tests/test_client_with_auth.py @@ -0,0 +1,51 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import pytest + +from ..restful.restful_client import Client as RESTfulClient +from ..restful.restful_client import RESTfulEmbeddingModelHandle + + +def test_client_auth(setup_with_auth): + endpoint, _ = setup_with_auth + client = RESTfulClient(endpoint) + with pytest.raises(RuntimeError): + client.list_models() + + client.login("user2", "pass2") + assert len(client.list_models()) == 0 + + with pytest.raises(RuntimeError): + client.launch_model( + model_name="jina-embeddings-v2-small-en", model_type="embedding" + ) + + client.login("user3", "pass3") + model_uid = client.launch_model( + model_name="jina-embeddings-v2-small-en", model_type="embedding" + ) + model = client.get_model(model_uid=model_uid) + assert isinstance(model, RESTfulEmbeddingModelHandle) + + completion = model.create_embedding("write a poem.") + assert len(completion["data"][0]["embedding"]) == 512 + + with pytest.raises(RuntimeError): + client.terminate_model(model_uid=model_uid) + + client.login("user1", "pass1") + assert len(client.list_models()) == 1 + client.terminate_model(model_uid=model_uid) + assert len(client.list_models()) == 0 diff --git a/xinference/conftest.py b/xinference/conftest.py index 4d41c326c4..5c0394d688 100644 --- a/xinference/conftest.py +++ b/xinference/conftest.py @@ -13,16 +13,19 @@ # limitations under the License. import asyncio +import json import logging import multiprocessing import os import signal import sys +import tempfile from typing import Dict, Optional import pytest import xoscar as xo +from .api.oauth2.types import AuthConfig, AuthStartupConfig, User from .constants import XINFERENCE_LOG_BACKUP_COUNT, XINFERENCE_LOG_MAX_BYTES from .core.supervisor import SupervisorActor from .deploy.utils import create_worker_actor_pool, get_log_file, get_timestamp_ms @@ -233,3 +236,58 @@ def setup_with_file_logging(): local_cluster_proc.terminate() restful_api_proc.terminate() + + [email protected] +def setup_with_auth(): + from .api.restful_api import run_in_subprocess as run_restful_api + from .deploy.utils import health_check as cluster_health_check + + logging.config.dictConfig(TEST_LOGGING_CONF) # type: ignore + + supervisor_addr = f"localhost:{xo.utils.get_next_port()}" + local_cluster_proc = run_test_cluster_in_subprocess( + supervisor_addr, TEST_LOGGING_CONF + ) + if not cluster_health_check(supervisor_addr, max_attempts=10, sleep_interval=3): + raise RuntimeError("Cluster is not available after multiple attempts") + + user1 = User(username="user1", password="pass1", permissions=["admin"]) + user2 = User(username="user2", password="pass2", permissions=["models:list"]) + user3 = User( + username="user3", + password="pass3", + permissions=["models:list", "models:read", "models:start"], + ) + auth_config = AuthConfig( + algorithm="HS256", + secret_key="09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7", + token_expire_in_minutes=30, + ) + startup_config = AuthStartupConfig( + auth_config=auth_config, user_config=[user1, user2, user3] + ) + _, auth_file = tempfile.mkstemp() + with open(auth_file, "w") as fd: + fd.write(json.dumps(startup_config.dict())) + + port = xo.utils.get_next_port() + restful_api_proc = run_restful_api( + supervisor_addr, + host="localhost", + port=port, + logging_conf=TEST_LOGGING_CONF, + auth_config_file=auth_file, + ) + endpoint = f"http://localhost:{port}" + if not api_health_check(endpoint, max_attempts=10, sleep_interval=5): + raise RuntimeError("Endpoint is not available after multiple attempts") + + yield f"http://localhost:{port}", 
supervisor_addr + + local_cluster_proc.terminate() + restful_api_proc.terminate() + try: + os.remove(auth_file) + except: + pass diff --git a/xinference/constants.py b/xinference/constants.py index 5fee675083..294ea71e02 100644 --- a/xinference/constants.py +++ b/xinference/constants.py @@ -39,6 +39,7 @@ def get_xinference_home() -> str: XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, "model") XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, "logs") XINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, "image") +XINFERENCE_AUTH_DIR = os.path.join(XINFERENCE_HOME, "auth") XINFERENCE_DEFAULT_LOCAL_HOST = "127.0.0.1" XINFERENCE_DEFAULT_DISTRIBUTED_HOST = "0.0.0.0" diff --git a/xinference/core/chat_interface.py b/xinference/core/chat_interface.py index aa5b284a72..3adbbd36b2 100644 --- a/xinference/core/chat_interface.py +++ b/xinference/core/chat_interface.py @@ -14,7 +14,7 @@ import logging import os -from typing import Generator, List +from typing import Generator, List, Optional import gradio as gr from gradio.components import Markdown, Textbox @@ -43,6 +43,7 @@ def __init__( model_ability: List[str], model_description: str, model_lang: List[str], + access_token: Optional[str], ): self.endpoint = endpoint self.model_uid = model_uid @@ -54,6 +55,9 @@ def __init__( self.model_ability = model_ability self.model_description = model_description self.model_lang = model_lang + self._access_token = ( + access_token.replace("Bearer ", "") if access_token is not None else None + ) def build(self) -> "gr.Blocks": if "chat" in self.model_ability: @@ -102,6 +106,7 @@ def generate_wrapper( from ..client import RESTfulClient client = RESTfulClient(self.endpoint) + client._set_token(self._access_token) model = client.get_model(self.model_uid) assert isinstance( model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle) @@ -198,6 +203,7 @@ def complete(text, hist, max_tokens, temperature) -> Generator: from ..client import RESTfulClient client = RESTfulClient(self.endpoint) + client._set_token(self._access_token) model = client.get_model(self.model_uid) assert isinstance(model, RESTfulGenerateModelHandle) @@ -234,6 +240,7 @@ def retry(text, hist, max_tokens, temperature) -> Generator: from ..client import RESTfulClient client = RESTfulClient(self.endpoint) + client._set_token(self._access_token) model = client.get_model(self.model_uid) assert isinstance(model, RESTfulGenerateModelHandle) diff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py index 0910ae3c9b..28099bdf1a 100644 --- a/xinference/deploy/cmdline.py +++ b/xinference/deploy/cmdline.py @@ -24,13 +24,13 @@ from .. 
import __version__ from ..client import RESTfulClient -from ..client.oscar.actor_client import ActorClient from ..client.restful.restful_client import ( RESTfulChatglmCppChatModelHandle, RESTfulChatModelHandle, RESTfulGenerateModelHandle, ) from ..constants import ( + XINFERENCE_AUTH_DIR, XINFERENCE_DEFAULT_DISTRIBUTED_HOST, XINFERENCE_DEFAULT_ENDPOINT_PORT, XINFERENCE_DEFAULT_LOCAL_HOST, @@ -62,10 +62,32 @@ def get_endpoint(endpoint: Optional[str]) -> str: return endpoint +def get_hash_endpoint(endpoint: str) -> str: + import hashlib + + m = hashlib.sha256() + m.update(bytes(endpoint, "utf-8")) + return m.hexdigest() + + +def get_stored_token( + endpoint: str, client: Optional[RESTfulClient] = None +) -> Optional[str]: + rest_client = RESTfulClient(endpoint) if client is None else client + authed = rest_client._cluster_authed + if not authed: + return None + + token_path = os.path.join(XINFERENCE_AUTH_DIR, get_hash_endpoint(endpoint)) + if not os.path.exists(token_path): + raise RuntimeError("Cannot find access token, please login first!") + with open(token_path, "r") as f: + access_token = str(f.read()) + return access_token + + def start_local_cluster( - log_level: str, - host: str, - port: int, + log_level: str, host: str, port: int, auth_config_file: Optional[str] = None ): from .local import main @@ -81,6 +103,7 @@ def start_local_cluster( host=host, port=port, logging_conf=dict_config, + auth_config_file=auth_config_file, ) @@ -159,12 +182,15 @@ def cli( type=int, help="Specify the port number for the Xinference server.", ) -def local( - log_level: str, - host: str, - port: int, -): - start_local_cluster(log_level=log_level, host=host, port=port) [email protected]( + "--auth-config", + type=str, + help="Specify the auth config json file.", +) +def local(log_level: str, host: str, port: int, auth_config: Optional[str]): + start_local_cluster( + log_level=log_level, host=host, port=port, auth_config_file=auth_config + ) @click.command( @@ -196,7 +222,18 @@ def local( type=int, help="Specify the port number for the Xinference supervisor.", ) -def supervisor(log_level: str, host: str, port: int, supervisor_port: Optional[int]): [email protected]( + "--auth-config", + type=str, + help="Specify the auth config json file.", +) +def supervisor( + log_level: str, + host: str, + port: int, + supervisor_port: Optional[int], + auth_config: Optional[str], +): from ..deploy.supervisor import main dict_config = get_config_dict( @@ -208,7 +245,11 @@ def supervisor(log_level: str, host: str, port: int, supervisor_port: Optional[i logging.config.dictConfig(dict_config) # type: ignore main( - host=host, port=port, supervisor_port=supervisor_port, logging_conf=dict_config + host=host, + port=port, + supervisor_port=supervisor_port, + logging_conf=dict_config, + auth_config_file=auth_config, ) @@ -288,6 +329,7 @@ def register_model( model = fd.read() client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) client.register_model( model_type=model_type, model=model, @@ -316,6 +358,7 @@ def unregister_model( endpoint = get_endpoint(endpoint) client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) client.unregister_model( model_type=model_type, model_name=model_name, @@ -343,8 +386,9 @@ def list_model_registrations( from tabulate import tabulate endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) + registrations = 
client.list_model_registrations(model_type=model_type) table = [] @@ -518,8 +562,9 @@ def model_launch( if size_in_billions is None or "_" in size_in_billions else int(size_in_billions) ) - client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) + model_uid = client.launch_model( model_name=model_name, model_type=model_type, @@ -550,6 +595,7 @@ def model_list(endpoint: Optional[str]): endpoint = get_endpoint(endpoint) client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) llm_table = [] embedding_table = [] @@ -626,8 +672,8 @@ def model_terminate( model_uid: str, ): endpoint = get_endpoint(endpoint) - client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) client.terminate_model(model_uid=model_uid) @@ -657,6 +703,8 @@ def model_generate( stream: bool, ): endpoint = get_endpoint(endpoint) + client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) if stream: # TODO: when stream=True, RestfulClient cannot generate words one by one. # So use Client in temporary. The implementation needs to be changed to @@ -669,7 +717,7 @@ async def generate_internal(): if prompt == "": break print(f"Completion: {prompt}", end="", file=sys.stdout) - async for chunk in model.generate( + for chunk in model.generate( prompt=prompt, generate_config={"stream": stream, "max_tokens": max_tokens}, ): @@ -680,7 +728,6 @@ async def generate_internal(): print(choice["text"], end="", flush=True, file=sys.stdout) print("", file=sys.stdout) - client = ActorClient(endpoint=endpoint) model = client.get_model(model_uid=model_uid) loop = asyncio.get_event_loop() @@ -700,8 +747,7 @@ async def generate_internal(): # avoid displaying exception-unhandled warnings task.exception() else: - restful_client = RESTfulClient(base_url=endpoint) - restful_model = restful_client.get_model(model_uid=model_uid) + restful_model = client.get_model(model_uid=model_uid) if not isinstance( restful_model, (RESTfulChatModelHandle, RESTfulGenerateModelHandle) ): @@ -744,6 +790,9 @@ def model_chat( ): # TODO: chat model roles may not be user and assistant. endpoint = get_endpoint(endpoint) + client = RESTfulClient(base_url=endpoint) + client._set_token(get_stored_token(endpoint, client)) + chat_history: "List[ChatCompletionMessage]" = [] if stream: # TODO: when stream=True, RestfulClient cannot generate words one by one. 
@@ -758,7 +807,7 @@ async def chat_internal(): break print("Assistant: ", end="", file=sys.stdout) response_content = "" - async for chunk in model.chat( + for chunk in model.chat( prompt=prompt, chat_history=chat_history, generate_config={"stream": stream, "max_tokens": max_tokens}, @@ -775,7 +824,6 @@ async def chat_internal(): ChatCompletionMessage(role="assistant", content=response_content) ) - client = ActorClient(endpoint=endpoint) model = client.get_model(model_uid=model_uid) loop = asyncio.get_event_loop() @@ -795,8 +843,7 @@ async def chat_internal(): # avoid displaying exception-unhandled warnings task.exception() else: - restful_client = RESTfulClient(base_url=endpoint) - restful_model = restful_client.get_model(model_uid=model_uid) + restful_model = client.get_model(model_uid=model_uid) if not isinstance( restful_model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle) ): @@ -822,5 +869,31 @@ async def chat_internal(): ) [email protected]("login", help="Login when the cluster is authenticated.") [email protected]("--endpoint", "-e", type=str, help="Xinference endpoint.") [email protected]("--username", type=str, required=True, help="Username.") [email protected]( + "--password", + type=str, + required=True, + help="Password.", +) +def cluster_login( + endpoint: Optional[str], + username: str, + password: str, +): + endpoint = get_endpoint(endpoint) + restful_client = RESTfulClient(base_url=endpoint) + if restful_client._cluster_authed: + restful_client.login(username, password) + access_token = restful_client._get_token() + assert access_token is not None + os.makedirs(XINFERENCE_AUTH_DIR, exist_ok=True) + hashed_ep = get_hash_endpoint(endpoint) + with open(os.path.join(XINFERENCE_AUTH_DIR, hashed_ep), "w") as f: + f.write(access_token) + + if __name__ == "__main__": cli() diff --git a/xinference/deploy/local.py b/xinference/deploy/local.py index d646f80906..a152c45edc 100644 --- a/xinference/deploy/local.py +++ b/xinference/deploy/local.py @@ -79,7 +79,12 @@ def run_in_subprocess( return p -def main(host: str, port: int, logging_conf: Optional[Dict] = None): +def main( + host: str, + port: int, + logging_conf: Optional[Dict] = None, + auth_config_file: Optional[str] = None, +): supervisor_address = f"{host}:{get_next_port()}" local_cluster = run_in_subprocess(supervisor_address, logging_conf) @@ -98,6 +103,7 @@ def main(host: str, port: int, logging_conf: Optional[Dict] = None): host=host, port=port, logging_conf=logging_conf, + auth_config_file=auth_config_file, ) finally: local_cluster.terminate() diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py index ddc4f25224..57f03c99c6 100644 --- a/xinference/deploy/supervisor.py +++ b/xinference/deploy/supervisor.py @@ -75,6 +75,7 @@ def main( port: int, supervisor_port: Optional[int], logging_conf: Optional[Dict] = None, + auth_config_file: Optional[str] = None, ): supervisor_address = f"{host}:{supervisor_port or get_next_port()}" local_cluster = run_in_subprocess(supervisor_address, logging_conf) @@ -94,6 +95,7 @@ def main( host=host, port=port, logging_conf=logging_conf, + auth_config_file=auth_config_file, ) finally: local_cluster.terminate() diff --git a/xinference/web/ui/package-lock.json b/xinference/web/ui/package-lock.json index dfd1fc9d1b..6cbc3b3670 100644 --- a/xinference/web/ui/package-lock.json +++ b/xinference/web/ui/package-lock.json @@ -27,7 +27,9 @@ "@testing-library/react": "^13.4.0", "@testing-library/user-event": "^13.5.0", "formik": "^2.4.2", + "jsonwebtoken": "^9.0.2", 
"react": "^18.2.0", + "react-cookie": "^6.1.1", "react-dom": "^18.2.0", "react-pro-sidebar": "^1.1.0-alpha.1", "react-router-dom": "^6.14.1", @@ -4960,6 +4962,11 @@ "@types/node": "*" } }, + "node_modules/@types/cookie": { + "version": "0.5.4", + "resolved": "https://registry.npmjs.org/@types/cookie/-/cookie-0.5.4.tgz", + "integrity": "sha512-7z/eR6O859gyWIAjuvBWFzNURmf2oPBmJlfVWkwehU5nzIyjwBsTh7WMmEEV4JFnHuQ3ex4oyTvfKzcyJVDBNA==" + }, "node_modules/@types/d3-color": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/@types/d3-color/-/d3-color-2.0.3.tgz", @@ -5069,6 +5076,15 @@ "@types/node": "*" } }, + "node_modules/@types/hoist-non-react-statics": { + "version": "3.3.5", + "resolved": "https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.5.tgz", + "integrity": "sha512-SbcrWzkKBw2cdwRTwQAswfpB9g9LJWfjtUeW/jvNwbhC8cpmmNYVePa+ncbUe0rGTQ7G3Ff6mYUN2VMfLVr+Sg==", + "dependencies": { + "@types/react": "*", + "hoist-non-react-statics": "^3.3.0" + } + }, "node_modules/@types/html-minifier-terser": { "version": "6.1.0", "resolved": "https://registry.npmjs.org/@types/html-minifier-terser/-/html-minifier-terser-6.1.0.tgz", @@ -6822,6 +6838,11 @@ "node-int64": "^0.4.0" } }, + "node_modules/buffer-equal-constant-time": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", + "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==" + }, "node_modules/buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", @@ -8224,6 +8245,14 @@ "resolved": "https://registry.npmjs.org/duplexer/-/duplexer-0.1.2.tgz", "integrity": "sha512-jtD6YG370ZCIi/9GTaJKQxWTZD045+4R4hTk/x1UyoqadyJ9x9CgSi1RlVDQF8U2sxLLSnFkCaMihqljHIWgMg==" }, + "node_modules/ecdsa-sig-formatter": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", + "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", + "dependencies": { + "safe-buffer": "^5.0.1" + } + }, "node_modules/ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", @@ -13249,6 +13278,27 @@ "node": ">=0.10.0" } }, + "node_modules/jsonwebtoken": { + "version": "9.0.2", + "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz", + "integrity": "sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==", + "dependencies": { + "jws": "^3.2.2", + "lodash.includes": "^4.3.0", + "lodash.isboolean": "^3.0.3", + "lodash.isinteger": "^4.0.4", + "lodash.isnumber": "^3.0.3", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.once": "^4.0.0", + "ms": "^2.1.1", + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12", + "npm": ">=6" + } + }, "node_modules/jsx-ast-utils": { "version": "3.3.5", "resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-3.3.5.tgz", @@ -13263,6 +13313,25 @@ "node": ">=4.0" } }, + "node_modules/jwa": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/jwa/-/jwa-1.4.1.tgz", + "integrity": "sha512-qiLX/xhEEFKUAJ6FiBMbes3w9ATzyk5W7Hvzpa/SLYdxNtng+gcurvrI7TbACjIXlsJyr05/S1oUhZrc63evQA==", + "dependencies": { + "buffer-equal-constant-time": "1.0.1", + "ecdsa-sig-formatter": "1.0.11", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/jws": { + "version": "3.2.2", + 
"resolved": "https://registry.npmjs.org/jws/-/jws-3.2.2.tgz", + "integrity": "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==", + "dependencies": { + "jwa": "^1.4.1", + "safe-buffer": "^5.0.1" + } + }, "node_modules/kind-of": { "version": "6.0.3", "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", @@ -13395,6 +13464,36 @@ "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", "integrity": "sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow==" }, + "node_modules/lodash.includes": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz", + "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==" + }, + "node_modules/lodash.isboolean": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz", + "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==" + }, + "node_modules/lodash.isinteger": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz", + "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==" + }, + "node_modules/lodash.isnumber": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz", + "integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==" + }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==" + }, "node_modules/lodash.memoize": { "version": "4.1.2", "resolved": "https://registry.npmjs.org/lodash.memoize/-/lodash.memoize-4.1.2.tgz", @@ -13405,6 +13504,11 @@ "resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz", "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==" }, + "node_modules/lodash.once": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz", + "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==" + }, "node_modules/lodash.sortby": { "version": "4.7.0", "resolved": "https://registry.npmjs.org/lodash.sortby/-/lodash.sortby-4.7.0.tgz", @@ -15863,6 +15967,19 @@ "node": ">=14" } }, + "node_modules/react-cookie": { + "version": "6.1.1", + "resolved": "https://registry.npmjs.org/react-cookie/-/react-cookie-6.1.1.tgz", + "integrity": "sha512-fuFRpf8LH6SfmVMowDUIRywJF5jAUDUWrm0EI5VdXfTl5bPcJ7B0zWbuYpT0Tvikx7Gs18MlvAT+P+744dUz2g==", + "dependencies": { + "@types/hoist-non-react-statics": "^3.3.1", + "hoist-non-react-statics": "^3.3.2", + "universal-cookie": "^6.0.0" + }, + "peerDependencies": { + "react": ">= 16.3.0" + } + }, "node_modules/react-dev-utils": { "version": "12.0.1", "resolved": 
"https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-12.0.1.tgz", @@ -18450,6 +18567,15 @@ "node": ">=8" } }, + "node_modules/universal-cookie": { + "version": "6.1.1", + "resolved": "https://registry.npmjs.org/universal-cookie/-/universal-cookie-6.1.1.tgz", + "integrity": "sha512-33S9x3CpdUnnjwTNs2Fgc41WGve2tdLtvaK2kPSbZRc5pGpz2vQFbRWMxlATsxNNe/Cy8SzmnmbuBM85jpZPtA==", + "dependencies": { + "@types/cookie": "^0.5.1", + "cookie": "^0.5.0" + } + }, "node_modules/universalify": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.0.tgz", diff --git a/xinference/web/ui/package.json b/xinference/web/ui/package.json index 86abc72cfb..2c0fe23afa 100644 --- a/xinference/web/ui/package.json +++ b/xinference/web/ui/package.json @@ -11,9 +11,9 @@ "@fullcalendar/list": "^6.1.8", "@fullcalendar/timegrid": "^6.1.8", "@mui/icons-material": "^5.14.0", + "@mui/lab": "latest", "@mui/material": "^5.14.0", "@mui/x-data-grid": "^6.10.0", - "@mui/lab": "latest", "@nivo/bar": "^0.83.0", "@nivo/core": "^0.83.0", "@nivo/geo": "^0.83.0", @@ -24,6 +24,7 @@ "@testing-library/user-event": "^13.5.0", "formik": "^2.4.2", "react": "^18.2.0", + "react-cookie": "^6.1.1", "react-dom": "^18.2.0", "react-pro-sidebar": "^1.1.0-alpha.1", "react-router-dom": "^6.14.1", @@ -58,9 +59,9 @@ ] }, "devDependencies": { - "@babel/plugin-proposal-private-property-in-object": "^7.21.11", "@babel/core": "^7.21.0", "@babel/eslint-parser": "^7.19.1", + "@babel/plugin-proposal-private-property-in-object": "^7.21.11", "eslint": "^7.32.0", "eslint-config-prettier": "^8.5.0", "eslint-plugin-react": "^7.24.0", diff --git a/xinference/web/ui/src/App.js b/xinference/web/ui/src/App.js index 693c872584..beffd7fc97 100644 --- a/xinference/web/ui/src/App.js +++ b/xinference/web/ui/src/App.js @@ -1,22 +1,98 @@ import { CssBaseline, ThemeProvider } from '@mui/material' +import Snackbar from '@mui/material/Snackbar' +import React, { useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' import { HashRouter, Route, Routes } from 'react-router-dom' +import { Alert } from './components/alertComponent' import { ApiContextProvider } from './components/apiContext' +import AuthAlertDialog from './components/authAlertDialog' +import { getEndpoint, isValidBearerToken } from './components/utils' import Layout from './scenes/_layout' import LaunchModel from './scenes/launch_model' +import Login from './scenes/login/login' import RegisterModel from './scenes/register_model' import RunningModels from './scenes/running_models' import { useMode } from './theme' function App() { const [theme] = useMode() + const [cookie, setCookie, removeCookie] = useCookies(['token']) + const [msg, setMsg] = useState('') + + const endPoint = getEndpoint() + + const removeToken = () => { + removeCookie('token', { path: '/' }) + } + + useEffect(() => { + // token possible value: no_auth / need_auth / <real bearer token> + fetch(endPoint + '/v1/cluster/auth', { + method: 'GET', + headers: { + 'Content-Type': 'application/json', + }, + }).then((res) => { + if (!res.ok) { + res.json().then((errorData) => { + setMsg( + `Server error: ${res.status} - ${ + errorData.detail || 'Unknown error' + }` + ) + }) + } else { + res.json().then((data) => { + if (data['auth'] === false) { + if (cookie.token !== 'no_auth') { + setCookie('token', 'no_auth', { path: '/' }) + } + } else { + // TODO: validate bearer token + if ( + cookie.token === undefined || + !isValidBearerToken(cookie.token) + ) { + // not a bearer token, need a 
bearer token here + setCookie('token', 'need_auth', { path: '/' }) + } + } + }) + } + }) + // return a function in useEffect means doing something on component unmount + return () => { + removeToken() + } + }, []) + + const handleClose = (event, reason) => { + if (reason === 'clickaway') { + return + } + setMsg('') + } + return ( <div className="app"> + <Snackbar + open={msg !== ''} + autoHideDuration={10000} + anchorOrigin={{ vertical: 'top', horizontal: 'center' }} + onClose={handleClose} + > + <Alert severity="error" onClose={handleClose} sx={{ width: '100%' }}> + {msg} + </Alert> + </Snackbar> <HashRouter> <ThemeProvider theme={theme}> <ApiContextProvider> <CssBaseline /> + <AuthAlertDialog /> <Routes> + <Route path="/login" element={<Login />} /> <Route element={<Layout />}> <Route path="/" element={<LaunchModel />} /> <Route path="/running_models" element={<RunningModels />} /> diff --git a/xinference/web/ui/src/components/Title.js b/xinference/web/ui/src/components/Title.js index a28cfd5d67..05e124a281 100644 --- a/xinference/web/ui/src/components/Title.js +++ b/xinference/web/ui/src/components/Title.js @@ -1,19 +1,42 @@ -import { Box, Typography } from '@mui/material' +import ExitToAppIcon from '@mui/icons-material/ExitToApp' +import { Box, Stack, Typography } from '@mui/material' +import Button from '@mui/material/Button' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' + +import { isValidBearerToken } from './utils' + +const Title = ({ title }) => { + const [cookie, , removeCookie] = useCookies(['token']) + const navigate = useNavigate() + + const handleLogout = () => { + removeCookie('token', { path: '/' }) + navigate('/login', { replace: true }) + } -const Title = ({ title, subtitle }) => { return ( <Box mb="30px"> - <Typography - variant="h2" - color="#141414" - fontWeight="bold" - sx={{ m: '0 0 5px 0' }} - > - {title} - </Typography> - <Typography variant="h5" color="#3d3d3d"> - {subtitle} - </Typography> + <Stack direction="row" alignItems="center" justifyContent="space-between"> + <Typography + variant="h2" + color="#141414" + fontWeight="bold" + sx={{ m: '0 0 5px 0' }} + > + {title} + </Typography> + {isValidBearerToken(cookie.token) && ( + <Button + variant="outlined" + size="large" + onClick={handleLogout} + startIcon={<ExitToAppIcon />} + > + LOG OUT + </Button> + )} + </Stack> </Box> ) } diff --git a/xinference/web/ui/src/components/alertComponent.js b/xinference/web/ui/src/components/alertComponent.js new file mode 100644 index 0000000000..4603b3f9c6 --- /dev/null +++ b/xinference/web/ui/src/components/alertComponent.js @@ -0,0 +1,8 @@ +import MuiAlert from '@mui/material/Alert' +import React from 'react' + +const Alert = React.forwardRef(function Alert(props, ref) { + return <MuiAlert elevation={6} ref={ref} variant="filled" {...props} /> +}) + +export { Alert } diff --git a/xinference/web/ui/src/components/apiContext.js b/xinference/web/ui/src/components/apiContext.js index 0b8f5aa748..f4f223e648 100644 --- a/xinference/web/ui/src/components/apiContext.js +++ b/xinference/web/ui/src/components/apiContext.js @@ -1,18 +1,14 @@ import React, { createContext, useState } from 'react' +import { getEndpoint } from './utils' + export const ApiContext = createContext() export const ApiContextProvider = ({ children }) => { const [isCallingApi, setIsCallingApi] = useState(false) const [isUpdatingModel, setIsUpdatingModel] = useState(false) const [errorMsg, setErrorMsg] = useState('') - let endPoint = '' - if (!process.env.NODE_ENV || 
process.env.NODE_ENV === 'development') { - endPoint = 'http://127.0.0.1:9997' - } else { - const fullUrl = window.location.href - endPoint = fullUrl.split('/ui')[0] - } + const endPoint = getEndpoint() return ( <ApiContext.Provider diff --git a/xinference/web/ui/src/components/authAlertDialog.js b/xinference/web/ui/src/components/authAlertDialog.js new file mode 100644 index 0000000000..4150ac54b2 --- /dev/null +++ b/xinference/web/ui/src/components/authAlertDialog.js @@ -0,0 +1,92 @@ +import Button from '@mui/material/Button' +import Dialog from '@mui/material/Dialog' +import DialogActions from '@mui/material/DialogActions' +import DialogContent from '@mui/material/DialogContent' +import DialogContentText from '@mui/material/DialogContentText' +import DialogTitle from '@mui/material/DialogTitle' +import * as React from 'react' +import { useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' + +export default function AuthAlertDialog() { + const navigate = useNavigate() + const [authStatus, setAuthStatus] = useState('') + const [, , removeCookie] = useCookies(['token']) + + const handleAuthStatus = () => { + const status = localStorage.getItem('authStatus') + if (status) { + setAuthStatus(status) + } else { + setAuthStatus('') + } + } + + useEffect(() => { + localStorage.removeItem('authStatus') + window.addEventListener('auth-status', handleAuthStatus) + + return () => { + window.removeEventListener('auth-status', handleAuthStatus) + } + }, []) + + const handleClose = () => { + // trigger first + const code = localStorage.getItem('authStatus') + localStorage.removeItem('authStatus') + setAuthStatus('') + if (code === '401') { + removeCookie('token', { path: '/' }) + navigate('/login', { replace: true }) + } + } + + const handleDialogClose = (event, reason) => { + if (reason && reason === 'backdropClick') { + return + } + localStorage.removeItem('authStatus') + setAuthStatus('') + } + + return ( + <React.Fragment> + <Dialog + fullWidth + maxWidth="md" + open={authStatus === '401' || authStatus === '403'} + onClose={handleDialogClose} + aria-labelledby="alert-dialog-title" + aria-describedby="alert-dialog-description" + > + {authStatus === '403' && ( + <DialogTitle id="alert-dialog-title"> + {'Permission Error'} + </DialogTitle> + )} + {authStatus === '401' && ( + <DialogTitle id="alert-dialog-title"> + {'Authentication Error'} + </DialogTitle> + )} + <DialogContent> + {authStatus === '403' && ( + <DialogContentText id="alert-dialog-description"> + {'You do not have permissions to do this!'} + </DialogContentText> + )} + {authStatus === '401' && ( + <DialogContentText id="alert-dialog-description"> + {'Invalid credentials! 
Please login.'} + </DialogContentText> + )} + </DialogContent> + <DialogActions> + <Button onClick={handleClose}>CONFIRMED</Button> + </DialogActions> + </Dialog> + </React.Fragment> + ) +} diff --git a/xinference/web/ui/src/components/errorMessageSnackBar.js b/xinference/web/ui/src/components/errorMessageSnackBar.js index 2c4802adea..905f84d50b 100644 --- a/xinference/web/ui/src/components/errorMessageSnackBar.js +++ b/xinference/web/ui/src/components/errorMessageSnackBar.js @@ -1,13 +1,9 @@ -import MuiAlert from '@mui/material/Alert' import Snackbar from '@mui/material/Snackbar' import React, { useContext } from 'react' +import { Alert } from './alertComponent' import { ApiContext } from './apiContext' -const Alert = React.forwardRef(function Alert(props, ref) { - return <MuiAlert elevation={6} ref={ref} variant="filled" {...props} /> -}) - const ErrorMessageSnackBar = () => { const { errorMsg, setErrorMsg } = useContext(ApiContext) diff --git a/xinference/web/ui/src/components/fetcher.js b/xinference/web/ui/src/components/fetcher.js new file mode 100644 index 0000000000..6d1544ebe0 --- /dev/null +++ b/xinference/web/ui/src/components/fetcher.js @@ -0,0 +1,36 @@ +import { Cookies } from 'react-cookie' + +import { isValidBearerToken } from './utils' + +const cookies = new Cookies() + +const updateOptions = (url, options) => { + const update = { ...options } + if (cookies.get('token') !== 'no_auth') { + update.headers = { + ...update.headers, + Authorization: 'Bearer ' + cookies.get('token'), + } + } + return update +} + +export default function fetcher(url, options) { + return fetch(url, updateOptions(url, options)).then((res) => { + // For the situation that server has already been restarted, the current token may become invalid, + // which leads to UI hangs. 
+ if (res.status === 401 && isValidBearerToken(cookies.get('token'))) { + if (localStorage.getItem('authStatus') !== '401') { + localStorage.setItem('authStatus', '401') + window.dispatchEvent(new Event('auth-status')) + } + } else if (res.status === 403 && isValidBearerToken(cookies.get('token'))) { + if (localStorage.getItem('authStatus') !== '403') { + localStorage.setItem('authStatus', '403') + window.dispatchEvent(new Event('auth-status')) + } + } else { + return res + } + }) +} diff --git a/xinference/web/ui/src/components/utils.js b/xinference/web/ui/src/components/utils.js new file mode 100644 index 0000000000..fe995efe03 --- /dev/null +++ b/xinference/web/ui/src/components/utils.js @@ -0,0 +1,18 @@ +const getEndpoint = () => { + let endPoint = '' + if (!process.env.NODE_ENV || process.env.NODE_ENV === 'development') { + endPoint = 'http://127.0.0.1:9997' + } else { + const fullUrl = window.location.href + endPoint = fullUrl.split('/ui')[0] + } + return endPoint +} + +const isValidBearerToken = (token) => { + return ( + token !== '' && token !== undefined && token !== null && token.length > 10 + ) +} + +export { getEndpoint, isValidBearerToken } diff --git a/xinference/web/ui/src/index.js b/xinference/web/ui/src/index.js index 34d5faf9cb..eed265203d 100644 --- a/xinference/web/ui/src/index.js +++ b/xinference/web/ui/src/index.js @@ -1,4 +1,5 @@ import React from 'react' +import { CookiesProvider } from 'react-cookie' import ReactDOM from 'react-dom/client' import App from './App' @@ -6,6 +7,8 @@ import App from './App' const root = ReactDOM.createRoot(document.getElementById('root')) root.render( <React.StrictMode> - <App /> + <CookiesProvider> + <App /> + </CookiesProvider> </React.StrictMode> ) diff --git a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js index 07e01749f9..14c37a1a7d 100644 --- a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js +++ b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js @@ -12,6 +12,7 @@ import IconButton from '@mui/material/IconButton' import React, { useContext, useEffect, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' const CARD_HEIGHT = 270 const CARD_WIDTH = 270 @@ -46,8 +47,8 @@ const EmbeddingCard = ({ model_type: 'embedding', } - // First fetch request to initiate the model - fetch(url + '/v1/models', { + // First fetcher request to initiate the model + fetcher(url + '/v1/models', { method: 'POST', headers: { 'Content-Type': 'application/json', @@ -204,7 +205,7 @@ const EmbeddingCard = ({ const handeCustomDelete = (e) => { e.stopPropagation() - fetch(url + `/v1/model_registrations/embedding/${modelData.model_name}`, { + fetcher(url + `/v1/model_registrations/embedding/${modelData.model_name}`, { method: 'DELETE', headers: { 'Content-Type': 'application/json', diff --git a/xinference/web/ui/src/scenes/launch_model/index.js b/xinference/web/ui/src/scenes/launch_model/index.js index 33aceee695..b6c89bbb94 100644 --- a/xinference/web/ui/src/scenes/launch_model/index.js +++ b/xinference/web/ui/src/scenes/launch_model/index.js @@ -1,6 +1,8 @@ import { TabContext, TabList, TabPanel } from '@mui/lab' import { Box, Tab } from '@mui/material' import React, { useContext, useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' import { ApiContext } from '../../components/apiContext' import ErrorMessageSnackBar from 
'../../components/errorMessageSnackBar' @@ -16,12 +18,22 @@ const LaunchModel = () => { const [gpuAvailable, setGPUAvailable] = useState(-1) const { setErrorMsg } = useContext(ApiContext) + const [cookie] = useCookies(['token']) + const navigate = useNavigate() const handleTabChange = (event, newValue) => { setValue(newValue) } useEffect(() => { + if (cookie.token === '' || cookie.token === undefined) { + return + } + if (cookie.token === 'need_auth') { + navigate('/login', { replace: true }) + return + } + if (gpuAvailable === -1) { fetch(endPoint + '/v1/cluster/devices', { method: 'GET', @@ -45,7 +57,7 @@ const LaunchModel = () => { } }) } - }, []) + }, [cookie.token]) return ( <Box m="20px"> diff --git a/xinference/web/ui/src/scenes/launch_model/launchCustom.js b/xinference/web/ui/src/scenes/launch_model/launchCustom.js index eda5599680..fbbf683ab1 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchCustom.js +++ b/xinference/web/ui/src/scenes/launch_model/launchCustom.js @@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material' import React, { useContext, useEffect, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import EmbeddingCard from './embeddingCard' import ModelCard from './modelCard' import RerankCard from './rerankCard' @@ -33,7 +34,7 @@ const LaunchCustom = ({ gpuAvailable }) => { try { setIsCallingApi(true) - const rerankResponse = await fetch( + const rerankResponse = await fetcher( `${endPoint}/v1/model_registrations/rerank`, { method: 'GET', @@ -44,7 +45,7 @@ const LaunchCustom = ({ gpuAvailable }) => { (data) => !data.is_builtin ) - const embeddingResponse = await fetch( + const embeddingResponse = await fetcher( `${endPoint}/v1/model_registrations/embedding`, { method: 'GET', @@ -56,7 +57,7 @@ const LaunchCustom = ({ gpuAvailable }) => { (data) => !data.is_builtin ) - const llmResponse = await fetch( + const llmResponse = await fetcher( `${endPoint}/v1/model_registrations/LLM`, { method: 'GET', @@ -69,7 +70,7 @@ const LaunchCustom = ({ gpuAvailable }) => { const newEmbeddingData = await Promise.all( customEmbeddingRegistrations.map(async (registration) => { - const desc = await fetch( + const desc = await fetcher( `${endPoint}/v1/model_registrations/embedding/${registration.model_name}`, { method: 'GET', @@ -85,7 +86,7 @@ const LaunchCustom = ({ gpuAvailable }) => { const newLLMData = await Promise.all( customLLMRegistrations.map(async (registration) => { - const desc = await fetch( + const desc = await fetcher( `${endPoint}/v1/model_registrations/LLM/${registration.model_name}`, { method: 'GET', @@ -101,7 +102,7 @@ const LaunchCustom = ({ gpuAvailable }) => { const newRerankData = await Promise.all( customRerankRegistrations.map(async (registration) => { - const desc = await fetch( + const desc = await fetcher( `${endPoint}/v1/model_registrations/rerank/${registration.model_name}`, { method: 'GET', diff --git a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js index 9d54c95ebe..fecbbf9eb0 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js +++ b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js @@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material' import React, { useContext, useEffect, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import EmbeddingCard 
from './embeddingCard' const LaunchEmbedding = () => { @@ -31,7 +32,7 @@ const LaunchEmbedding = () => { try { setIsCallingApi(true) - const response = await fetch( + const response = await fetcher( `${endPoint}/v1/model_registrations/embedding?detailed=true`, { method: 'GET', @@ -41,7 +42,7 @@ const LaunchEmbedding = () => { const registrations = await response.json() const newRegistrationData = await Promise.all( registrations.map(async (registration) => { - const desc = await fetch( + const desc = await fetcher( `${endPoint}/v1/model_registrations/embedding/${registration.model_name}`, { method: 'GET', diff --git a/xinference/web/ui/src/scenes/launch_model/launchLLM.js b/xinference/web/ui/src/scenes/launch_model/launchLLM.js index 9755c13462..9b3eabc18d 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchLLM.js +++ b/xinference/web/ui/src/scenes/launch_model/launchLLM.js @@ -7,19 +7,22 @@ import { TextField, } from '@mui/material' import React, { useContext, useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import ModelCard from './modelCard' const LaunchLLM = ({ gpuAvailable }) => { let endPoint = useContext(ApiContext).endPoint - const [registrationData, setRegistrationData] = useState([]) const { isCallingApi, setIsCallingApi } = useContext(ApiContext) const { isUpdatingModel } = useContext(ApiContext) + const { setErrorMsg } = useContext(ApiContext) + const [cookie] = useCookies(['token']) + const [registrationData, setRegistrationData] = useState([]) // States used for filtering const [searchTerm, setSearchTerm] = useState('') - const [modelAbility, setModelAbility] = useState('all') const handleChange = (event) => { @@ -53,23 +56,39 @@ const LaunchLLM = ({ gpuAvailable }) => { return true } - const update = async () => { - if (isCallingApi || isUpdatingModel) return + const update = () => { + if ( + isCallingApi || + isUpdatingModel || + cookie.token === '' || + cookie.token === undefined || + cookie.token === 'need_auth' + ) + return try { setIsCallingApi(true) - const response = await fetch( - `${endPoint}/v1/model_registrations/LLM?detailed=true`, - { - method: 'GET', + fetcher(`${endPoint}/v1/model_registrations/LLM?detailed=true`, { + method: 'GET', + }).then((response) => { + if (!response.ok) { + response + .json() + .then((errData) => + setErrorMsg( + `Server error: ${response.status} - ${ + errData.detail || 'Unknown error' + }` + ) + ) + } else { + response.json().then((data) => { + const builtinRegistrations = data.filter((v) => v.is_builtin) + setRegistrationData(builtinRegistrations) + }) } - ) - - const registrations = await response.json() - const builtinRegistrations = registrations.filter((v) => v.is_builtin) - - setRegistrationData(builtinRegistrations) + }) } catch (error) { console.error('Error:', error) } finally { @@ -78,8 +97,8 @@ const LaunchLLM = ({ gpuAvailable }) => { } useEffect(() => { - update().catch(console.error) - }, []) + update() + }, [cookie.token]) const style = { display: 'grid', diff --git a/xinference/web/ui/src/scenes/launch_model/launchRerank.js b/xinference/web/ui/src/scenes/launch_model/launchRerank.js index a342d3c9ca..bb26b629f4 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchRerank.js +++ b/xinference/web/ui/src/scenes/launch_model/launchRerank.js @@ -2,6 +2,7 @@ import { Box, FormControl, TextField } from '@mui/material' import React, { useContext, useEffect, useState } from 
'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import RerankCard from './rerankCard' const LaunchRerank = () => { @@ -31,7 +32,7 @@ const LaunchRerank = () => { try { setIsCallingApi(true) - const response = await fetch( + const response = await fetcher( `${endPoint}/v1/model_registrations/rerank?detailed=true`, { method: 'GET', @@ -41,7 +42,7 @@ const LaunchRerank = () => { const registrations = await response.json() const newRegistrationData = await Promise.all( registrations.map(async (registration) => { - const desc = await fetch( + const desc = await fetcher( `${endPoint}/v1/model_registrations/rerank/${registration.model_name}`, { method: 'GET', diff --git a/xinference/web/ui/src/scenes/launch_model/modelCard.js b/xinference/web/ui/src/scenes/launch_model/modelCard.js index 8f3224d28a..b4d0330b05 100644 --- a/xinference/web/ui/src/scenes/launch_model/modelCard.js +++ b/xinference/web/ui/src/scenes/launch_model/modelCard.js @@ -21,8 +21,10 @@ import { import IconButton from '@mui/material/IconButton' import Typography from '@mui/material/Typography' import React, { useContext, useEffect, useState } from 'react' +import { useNavigate } from 'react-router-dom' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' const CARD_HEIGHT = 380 const CARD_WIDTH = 300 @@ -33,6 +35,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { const { isCallingApi, setIsCallingApi } = useContext(ApiContext) const { isUpdatingModel } = useContext(ApiContext) const { setErrorMsg } = useContext(ApiContext) + const navigate = useNavigate() // Model parameter selections const [modelUID, setModelUID] = useState('') @@ -122,8 +125,8 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { nGPU === '0' ? null : nGPU === 'auto' ? 
'auto' : parseInt(nGPU, 10), } - // First fetch request to initiate the model - fetch(url + '/v1/models', { + // First fetcher request to initiate the model + fetcher(url + '/v1/models', { method: 'POST', headers: { 'Content-Type': 'application/json', @@ -141,7 +144,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { ) }) } else { - window.open(url + '/ui/#/running_models', '_blank', 'noreferrer') + navigate('/running_models') } setIsCallingApi(false) }) @@ -281,7 +284,7 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { const handeCustomDelete = (e) => { e.stopPropagation() - fetch(url + `/v1/model_registrations/LLM/${modelData.model_name}`, { + fetcher(url + `/v1/model_registrations/LLM/${modelData.model_name}`, { method: 'DELETE', headers: { 'Content-Type': 'application/json', diff --git a/xinference/web/ui/src/scenes/launch_model/rerankCard.js b/xinference/web/ui/src/scenes/launch_model/rerankCard.js index a38ff6c39c..f5b8c1d21e 100644 --- a/xinference/web/ui/src/scenes/launch_model/rerankCard.js +++ b/xinference/web/ui/src/scenes/launch_model/rerankCard.js @@ -12,6 +12,7 @@ import IconButton from '@mui/material/IconButton' import React, { useContext, useEffect, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' const CARD_HEIGHT = 270 const CARD_WIDTH = 270 @@ -45,8 +46,8 @@ const RerankCard = ({ model_type: 'rerank', } - // First fetch request to initiate the model - fetch(url + '/v1/models', { + // First fetcher request to initiate the model + fetcher(url + '/v1/models', { method: 'POST', headers: { 'Content-Type': 'application/json', @@ -190,7 +191,7 @@ const RerankCard = ({ const handeCustomDelete = (e) => { e.stopPropagation() - fetch(url + `/v1/model_registrations/rerank/${modelData.model_name}`, { + fetcher(url + `/v1/model_registrations/rerank/${modelData.model_name}`, { method: 'DELETE', headers: { 'Content-Type': 'application/json', diff --git a/xinference/web/ui/src/scenes/login/header.js b/xinference/web/ui/src/scenes/login/header.js new file mode 100644 index 0000000000..247951c19a --- /dev/null +++ b/xinference/web/ui/src/scenes/login/header.js @@ -0,0 +1,37 @@ +import { AppBar, Box, Toolbar } from '@mui/material' +import Typography from '@mui/material/Typography' +import * as React from 'react' + +import icon from '../../media/icon.webp' + +export default function Header() { + return ( + <AppBar + elevation={0} + color="transparent" + sx={{ + backdropFilter: 'blur(20px)', + borderBottom: 1, + borderColor: 'grey.300', + zIndex: (theme) => theme.zIndex.drawer + 1, + }} + > + <Toolbar sx={{ justifyContent: 'start' }}> + <Box + component="img" + alt="profile" + src={icon} + height="60px" + width="60px" + borderRadius="50%" + sx={{ objectFit: 'cover', mr: 1.5 }} + /> + <Box textAlign="left"> + <Typography fontWeight="bold" fontSize="1.7rem"> + {'Xinference'} + </Typography> + </Box> + </Toolbar> + </AppBar> + ) +} diff --git a/xinference/web/ui/src/scenes/login/login.js b/xinference/web/ui/src/scenes/login/login.js new file mode 100644 index 0000000000..05a26b1b93 --- /dev/null +++ b/xinference/web/ui/src/scenes/login/login.js @@ -0,0 +1,112 @@ +import { Box } from '@mui/material' +import Button from '@mui/material/Button' +import Container from '@mui/material/Container' +import TextField from '@mui/material/TextField' +import Typography from '@mui/material/Typography' +import * as React from 'react' +import { Fragment, useContext, useState 
} from 'react' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' + +import { ApiContext } from '../../components/apiContext' +import ErrorMessageSnackBar from '../../components/errorMessageSnackBar' +import { getEndpoint } from '../../components/utils' +import Header from './header' + +function Login() { + const [, setCookie] = useCookies(['token']) + const navigate = useNavigate() + const [username, setUsername] = useState('') + const [password, setPassword] = useState('') + const { setErrorMsg } = useContext(ApiContext) + const endpoint = getEndpoint() + + const handleSubmit = () => { + fetch(endpoint + '/token', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + username: username, + password: password, + }), + }).then((res) => { + if (!res.ok) { + res.json().then((errorData) => { + setErrorMsg( + `Login failed: ${res.status} - ${ + errorData.detail || 'Unknown error' + }` + ) + }) + } else { + res.json().then((data) => { + setCookie('token', data['access_token'], { path: '/' }) + navigate('/') + }) + } + }) + } + + return ( + <Fragment> + <Header /> + <Container component="main" maxWidth="xl" sx={{ marginTop: 20 }}> + <ErrorMessageSnackBar /> + <Box + sx={{ + marginTop: 8, + display: 'flex', + flexDirection: 'column', + alignItems: 'center', + }} + > + <Typography component="h1" variant="h5"> + LOGIN + </Typography> + <Box component="main" noValidate sx={{ mt: 1 }}> + <TextField + margin="normal" + required + fullWidth + id="username" + label="Username" + name="username" + value={username} + onChange={(e) => { + setUsername(e.target.value) + }} + autoFocus + /> + <TextField + margin="normal" + required + fullWidth + name="password" + label="Password" + type="password" + id="password" + autoComplete="current-password" + value={password} + onChange={(e) => { + setPassword(e.target.value) + }} + /> + <Button + type="submit" + fullWidth + variant="contained" + sx={{ mt: 3, mb: 2 }} + onClick={handleSubmit} + > + Sign In + </Button> + </Box> + </Box> + </Container> + </Fragment> + ) +} + +export default Login diff --git a/xinference/web/ui/src/scenes/register_model/index.js b/xinference/web/ui/src/scenes/register_model/index.js index 85dc8c8090..5d8a760368 100644 --- a/xinference/web/ui/src/scenes/register_model/index.js +++ b/xinference/web/ui/src/scenes/register_model/index.js @@ -14,9 +14,12 @@ import AlertTitle from '@mui/material/AlertTitle' import Button from '@mui/material/Button' import TextField from '@mui/material/TextField' import React, { useContext, useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' import { ApiContext } from '../../components/apiContext' import ErrorMessageSnackBar from '../../components/errorMessageSnackBar' +import fetcher from '../../components/fetcher' import Title from '../../components/Title' import { useMode } from '../../theme' import RegisterEmbeddingModel from './register_embedding' @@ -54,6 +57,8 @@ const RegisterModel = () => { }) const [familyLabel, setFamilyLabel] = useState('') const [tabValue, setTabValue] = React.useState('1') + const [cookie] = useCookies(['token']) + const navigate = useNavigate() const errorModelName = formData.model_name.trim().length <= 0 const errorModelDescription = formData.model_description.length < 0 @@ -81,6 +86,14 @@ const RegisterModel = () => { errorFamily useEffect(() => { + if (cookie.token === '' || cookie.token === undefined) { + return + } + 
if (cookie.token === 'need_auth') { + navigate('/login', { replace: true }) + return + } + const getBuiltinFamilies = async () => { const response = await fetch(endPoint + '/v1/models/families', { method: 'GET', @@ -147,7 +160,7 @@ const RegisterModel = () => { console.error('Error: ', error) }) } - }) + }, [cookie.token]) const getFamilyByAbility = () => { if (formData.model_ability.includes('chat')) { @@ -232,7 +245,7 @@ const RegisterModel = () => { } try { - const response = await fetch(endPoint + '/v1/model_registrations/LLM', { + const response = await fetcher(endPoint + '/v1/model_registrations/LLM', { method: 'POST', headers: { 'Content-Type': 'application/json', diff --git a/xinference/web/ui/src/scenes/register_model/register_embedding.js b/xinference/web/ui/src/scenes/register_model/register_embedding.js index 29ce0335b5..ac7ab8d4ae 100644 --- a/xinference/web/ui/src/scenes/register_model/register_embedding.js +++ b/xinference/web/ui/src/scenes/register_model/register_embedding.js @@ -6,6 +6,7 @@ import TextField from '@mui/material/TextField' import React, { useContext, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import { useMode } from '../../theme' const SUPPORTED_LANGUAGES_DICT = { en: 'English', zh: 'Chinese' } @@ -41,7 +42,7 @@ const RegisterEmbeddingModel = () => { } try { - const response = await fetch( + const response = await fetcher( endPoint + '/v1/model_registrations/embedding', { method: 'POST', diff --git a/xinference/web/ui/src/scenes/register_model/register_rerank.js b/xinference/web/ui/src/scenes/register_model/register_rerank.js index ed8f7255c9..075b35ff9d 100644 --- a/xinference/web/ui/src/scenes/register_model/register_rerank.js +++ b/xinference/web/ui/src/scenes/register_model/register_rerank.js @@ -6,6 +6,7 @@ import TextField from '@mui/material/TextField' import React, { useContext, useState } from 'react' import { ApiContext } from '../../components/apiContext' +import fetcher from '../../components/fetcher' import { useMode } from '../../theme' const SUPPORTED_LANGUAGES_DICT = { en: 'English', zh: 'Chinese' } @@ -36,7 +37,7 @@ const RegisterRerankModel = () => { } try { - const response = await fetch( + const response = await fetcher( endPoint + '/v1/model_registrations/rerank', { method: 'POST', diff --git a/xinference/web/ui/src/scenes/running_models/index.js b/xinference/web/ui/src/scenes/running_models/index.js index e87bd57aea..7a755403b3 100644 --- a/xinference/web/ui/src/scenes/running_models/index.js +++ b/xinference/web/ui/src/scenes/running_models/index.js @@ -4,8 +4,12 @@ import { TabContext, TabList, TabPanel } from '@mui/lab' import { Box, Stack, Tab } from '@mui/material' import { DataGrid } from '@mui/x-data-grid' import React, { useContext, useEffect, useState } from 'react' +import { useCookies } from 'react-cookie' +import { useNavigate } from 'react-router-dom' import { ApiContext } from '../../components/apiContext' +import ErrorMessageSnackBar from '../../components/errorMessageSnackBar' +import fetcher from '../../components/fetcher' import Title from '../../components/Title' const RunningModels = () => { @@ -16,6 +20,9 @@ const RunningModels = () => { const [rerankModelData, setRerankModelData] = useState([]) const { isCallingApi, setIsCallingApi } = useContext(ApiContext) const { isUpdatingModel, setIsUpdatingModel } = useContext(ApiContext) + const { setErrorMsg } = useContext(ApiContext) + const [cookie] = useCookies(['token']) + const 
navigate = useNavigate() const endPoint = useContext(ApiContext).endPoint const handleTabChange = (event, newValue) => { @@ -23,6 +30,13 @@ const RunningModels = () => { } const update = (isCallingApi) => { + if (cookie.token === '' || cookie.token === undefined) { + return + } + if (cookie.token === 'need_auth') { + navigate('/login', { replace: true }) + return + } if (isCallingApi) { setLlmData([{ id: 'Loading, do not refresh page...', url: 'IS_LOADING' }]) setEmbeddingModelData([ @@ -36,36 +50,47 @@ const RunningModels = () => { ]) } else { setIsUpdatingModel(true) - fetch(`${endPoint}/v1/models/`, { + fetcher(`${endPoint}/v1/models/`, { method: 'GET', }) - .then((response) => response.json()) - .then((data) => { - const newLlmData = [] - const newEmbeddingModelData = [] - const newImageModelData = [] - const newRerankModelData = [] - Object.entries(data).forEach(([key, value]) => { - let newValue = { - ...value, - id: key, - url: key, - } - if (newValue.model_type === 'LLM') { - newLlmData.push(newValue) - } else if (newValue.model_type === 'embedding') { - newEmbeddingModelData.push(newValue) - } else if (newValue.model_type === 'image') { - newImageModelData.push(newValue) - } else if (newValue.model_type === 'rerank') { - newRerankModelData.push(newValue) - } - }) - setLlmData(newLlmData) - setEmbeddingModelData(newEmbeddingModelData) - setImageModelData(newImageModelData) - setRerankModelData(newRerankModelData) - setIsUpdatingModel(false) + .then((response) => { + if (!response.ok) { + response.json().then((errorData) => { + setErrorMsg( + `Login failed: ${response.status} - ${ + errorData.detail || 'Unknown error' + }` + ) + }) + } else { + response.json().then((data) => { + const newLlmData = [] + const newEmbeddingModelData = [] + const newImageModelData = [] + const newRerankModelData = [] + Object.entries(data).forEach(([key, value]) => { + let newValue = { + ...value, + id: key, + url: key, + } + if (newValue.model_type === 'LLM') { + newLlmData.push(newValue) + } else if (newValue.model_type === 'embedding') { + newEmbeddingModelData.push(newValue) + } else if (newValue.model_type === 'image') { + newImageModelData.push(newValue) + } else if (newValue.model_type === 'rerank') { + newRerankModelData.push(newValue) + } + }) + setLlmData(newLlmData) + setEmbeddingModelData(newEmbeddingModelData) + setImageModelData(newImageModelData) + setRerankModelData(newRerankModelData) + setIsUpdatingModel(false) + }) + } }) .catch((error) => { console.error('Error:', error) @@ -77,7 +102,7 @@ const RunningModels = () => { useEffect(() => { update(isCallingApi) // eslint-disable-next-line - }, [isCallingApi]) + }, [isCallingApi, cookie.token]) const llmColumns = [ { @@ -154,14 +179,14 @@ const RunningModels = () => { setIsCallingApi(true) - fetch(openUrl, { + fetcher(openUrl, { method: 'HEAD', }) .then((response) => { if (response.status === 404) { // If web UI doesn't exist (404 Not Found) console.log('UI does not exist, creating new...') - return fetch(gradioUrl, { + return fetcher(gradioUrl, { method: 'POST', headers: { 'Content-Type': 'application/json', @@ -231,7 +256,7 @@ const RunningModels = () => { return } setIsCallingApi(true) - fetch(closeUrl, { + fetcher(closeUrl, { method: 'DELETE', }) .then((response) => { @@ -328,7 +353,7 @@ const RunningModels = () => { return } setIsCallingApi(true) - fetch(closeUrl, { + fetcher(closeUrl, { method: 'DELETE', }) .then((response) => { @@ -414,6 +439,7 @@ const RunningModels = () => { }} > <Title title="Running Models" /> + 
<ErrorMessageSnackBar /> <TabContext value={tabValue}> <Box sx={{ borderBottom: 1, borderColor: 'divider' }}> <TabList
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Security Patches / Vulnerability Fixes" }
sympy__sympy-27577@857ca2b
sympy/sympy
Python
27,577
polys: Add PuiseuxRing and remove the ring cache
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes gh-24581. Includes the changes from gh-24585 to remove the PolyRing ring cache and the dynamically created PolyElement classes. This is analogous to gh-25691 which made changes to DMP in preparation for being able to make it so that DMP can use python-flint internally in gh-25722. #### Brief description of what is fixed or changed Remove the ring cache for polynomial rings and the dynamically created PolyElement subclasses. Some other changes are needed to avoid performance regressions as a result of not having all rings cached. Add a new PuiseuxRing type that can represent "polynomials" that have negative or fractional exponents like `x**(1/2)*y**-1`. This is needed by the `ring_series` module but currently `PolyElement` is abused for this just by inserting invalid exponents into the data structure. That abuse makes it impossible to use python-flint's multivariate polynomials as the internal representation of PolyElement. The ring_series module tests and docs are changed to use PuiseuxRing when necessary instead of PolyElement to represent series having negative or fractional exponents. I have also been through the whole test suite verifying that the types of coefficients in PolyElement are always correct. This required a few changes in some places. Various places that create a polynomial by mutating the zero polynomial have been changed to build up a dict instead. True division (`a / b`) with polynomials is no longer equivalent to floor division (`a // b`) and now uses `exquo` instead. This is a Python 2 hangover where division like `a / b` with integers would give floor division. Since we now have two separate division operators it does not make sense for true division to return floor division. Various other methods like `__rdiv__` etc were changed to be more consistent with true division. #### Other comments In future it would be good to expand on PuiseuxRing to make a proper PuiseuxSeries type that can keep track of the "precision" i.e. the number of terms in a series. This would require changes in the `ring_series` module functions though so for now the PuiseuxRing type just provides the functionality that the ring_series module currently expects. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. 
--> <!-- BEGIN RELEASE NOTES --> * polys * A memory leak caused by using many polynomial rings is fixed by not "caching" all rings permanently in memory. Elements of polynomial rings no longer use dynamically created classes. * A new PuiseuxRing type is added that can represent "polynomials" with negative or fractional exponents. These are used in the ring_series module to represent truncated Puiseux series. * **BREAKING CHANGE**: Using polynomial rings with the ring_series module now only works for series that have nonnegative integer exponents. For series with negative or fractional exponents the PuiseuxRing type must be used instead. * **BREAKING CHANGE**: True division like `a / b` with elements of polynomial rings now computes an exact division (`exquo`) rather than floor division. For floor division use `a // b`. <!-- END RELEASE NOTES -->
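The description above changes true division of polynomial ring elements from floor division to exact division (`exquo`). The sketch below illustrates the intended distinction using SymPy's public `ring` API; the printed results are illustrative, and the exact-division error mentioned in the final comment is the behaviour expected only once the described change is applied:

```python
from sympy import QQ
from sympy.polys.rings import ring

# A univariate polynomial ring over the rationals.
R, x = ring('x', QQ)

p = x**2 - 1
q = x - 1

# Floor division returns the polynomial quotient, dropping any remainder.
print(p // q)            # expected: x + 1
print((x**2 + 1) // q)   # expected: x + 1 (the remainder 2 is discarded)

# True division becomes an exact quotient under the change described above.
print(p / q)             # expected: x + 1, since q divides p exactly
# (x**2 + 1) / q would then be expected to raise an exact-division error
# (ExactQuotientFailed) instead of silently behaving like floor division.
```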
2025-02-09T15:12:47Z
memory leak, not a cache issue Hello, I have the following code, which uses the latest sympy (1.11.1) and was tested with Python 3.8.10 on Windows 10 and with Python 3.11.1 on macOS 13.1; the memory is constantly growing. Any ideas why? ``` from sympy import Point, Ray, Circle while True: circle = Circle(Point(0.0, 0.0), 0.5) ray = Ray(Point(0.2, 0.3), Point(0.3, 0.2)) res = circle.intersection(ray) ``` NOTE: the growth in geometry has been addressed, but the scope of the issue is greater than that module, so the issue is still open.
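One way to confirm growth of the kind reported above (not part of the original report; purely an illustrative measurement sketch) is to snapshot traced allocations with `tracemalloc` while running a bounded version of the loop:

```python
import tracemalloc

from sympy import Circle, Point, Ray

tracemalloc.start()

# Run a bounded version of the reported loop and print the traced
# allocation size every few iterations; steadily increasing numbers
# indicate that objects from earlier iterations are being retained.
for i in range(50):
    circle = Circle(Point(0.0, 0.0), 0.5)
    ray = Ray(Point(0.2, 0.3), Point(0.3, 0.2))
    circle.intersection(ray)
    if i % 10 == 0:
        current, peak = tracemalloc.get_traced_memory()
        print(f"iteration {i}: {current / 1e6:.2f} MB currently traced")
```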
I can reproduce this but no idea why it happens. Clearing the cache doesn't help. Calling `gc.collect` doesn't help. With this diff the memory usage stays constant: ```diff diff --git a/sympy/geometry/ellipse.py b/sympy/geometry/ellipse.py index 0c1c5d0..35d92af 100644 --- a/sympy/geometry/ellipse.py +++ b/sympy/geometry/ellipse.py @@ -14,7 +14,7 @@ from sympy.core.logic import fuzzy_bool from sympy.core.numbers import Rational, oo from sympy.core.sorting import ordered -from sympy.core.symbol import Dummy, uniquely_named_symbol, _symbol +from sympy.core.symbol import Symbol, Dummy, uniquely_named_symbol, _symbol from sympy.simplify import simplify, trigsimp from sympy.functions.elementary.miscellaneous import sqrt, Max from sympy.functions.elementary.trigonometric import cos, sin @@ -664,8 +664,8 @@ def intersection(self, o): [Point2D(-17/5, -12/5), Point2D(-17/5, 12/5), Point2D(7/5, -12/5), Point2D(7/5, 12/5)] """ # TODO: Replace solve with nonlinsolve, when nonlinsolve will be able to solve in real domain - x = Dummy('x', real=True) - y = Dummy('y', real=True) + x = Symbol('x', real=True) + y = Symbol('y', real=True) if isinstance(o, Point): if o in self: ``` So what is happening is that every time `intersection` is called two new `Dummy` symbols are created. Somewhere there is some sort of cache that grows without bound if more and more symbols are created. It's here (and a similar one in fields.py): https://github.com/sympy/sympy/blob/f8e33851e174bd686ac8dc88d75f955cf5dc8eeb/sympy/polys/rings.py#L194 This cache grows without bound every time a polynomial ring with new symbols is used. A fix could be to use `lru_cache` instead of a global cache. I am not entirely sure that this is only a cache though because it also associates with a particular PolyElement class and it is possible that that class needs to be globally unique. Can't that be solved by using a WeakDictionary? I've never been a huge fan of this class design in the polys FWIW. I think that a WeakValue dictionary could work but it would probably not achieve the performance benefits intended by the cache in the first place. The point is that this second call is faster: ```python In [1]: %time K = QQ[x,y] CPU times: user 4 ms, sys: 0 ns, total: 4 ms Wall time: 2.38 ms In [2]: %time K = QQ[x,y] CPU times: user 0 ns, sys: 0 ns, total: 0 ns Wall time: 89.6 µs ``` Usually these rings are created transiently as part of an operation like `factor`, `cancel` etc. Since the ring is only a temporary object there will probably not be many in existence at any one time so with a WeakValue dictionary the ring will always disappear. In that case we would be better off just not having any cache. On the other hand of many operations like `factor`, `cancel` etc are used as part of a complex operation it is likely that the same ring will be reconstructed many times so keeping a cache can save time on that. 
I think that probably a LRU cache is good here but given the size of these rings it should probably be kept quite small: ```diff diff --git a/sympy/polys/fields.py b/sympy/polys/fields.py index a3f239c..cdb079a 100644 --- a/sympy/polys/fields.py +++ b/sympy/polys/fields.py @@ -6,6 +6,7 @@ from operator import add, mul, lt, le, gt, ge +from sympy.core.cache import __cacheit as _cacheit from sympy.core.expr import Expr from sympy.core.mod import Mod from sympy.core.numbers import Exp1 @@ -99,11 +100,11 @@ def sfield(exprs, *symbols, **options): else: return (_field, fracs) -_field_cache: dict[Any, Any] = {} class FracField(DefaultPrinting): """Multivariate distributed rational function field. """ + @_cacheit(10) def __new__(cls, symbols, domain, order=lex): from sympy.polys.rings import PolyRing ring = PolyRing(symbols, domain, order) @@ -113,7 +114,7 @@ def __new__(cls, symbols, domain, order=lex): order = ring.order _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _field_cache.get(_hash_tuple) + obj = None if obj is None: obj = object.__new__(cls) @@ -138,8 +139,6 @@ def __new__(cls, symbols, domain, order=lex): if not hasattr(obj, name): setattr(obj, name, generator) - _field_cache[_hash_tuple] = obj - return obj def _gens(self): diff --git a/sympy/polys/rings.py b/sympy/polys/rings.py index 0db1897..3a56486 100644 --- a/sympy/polys/rings.py +++ b/sympy/polys/rings.py @@ -7,6 +7,7 @@ from functools import reduce from types import GeneratorType +from sympy.core.cache import __cacheit as _cacheit from sympy.core.expr import Expr from sympy.core.numbers import igcd, oo from sympy.core.symbol import Symbol, symbols as _symbols @@ -191,11 +192,11 @@ def _parse_symbols(symbols): raise GeneratorsError("expected a string, Symbol or expression or a non-empty sequence of strings, Symbols or expressions") -_ring_cache: dict[Any, Any] = {} class PolyRing(DefaultPrinting, IPolys): """Multivariate distributed polynomial ring. """ + @_cacheit(10) def __new__(cls, symbols, domain, order=lex): symbols = tuple(_parse_symbols(symbols)) ngens = len(symbols) @@ -203,7 +204,7 @@ def __new__(cls, symbols, domain, order=lex): order = OrderOpt.preprocess(order) _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _ring_cache.get(_hash_tuple) + obj = None if obj is None: if domain.is_Composite and set(symbols) & set(domain.symbols): @@ -257,8 +258,6 @@ def __new__(cls, symbols, domain, order=lex): if not hasattr(obj, name): setattr(obj, name, generator) - _ring_cache[_hash_tuple] = obj - return obj def _gens(self): ``` In the long run the best solution would be to make the ring objects lighter weight and faster to construct. > With this diff the memory usage stays constant: Those two Dummy symbols should be created at the top of ellipse.py and then used the 4 times when needed in the various classes -- there is no need to keep creating new Dummy symbols. To be clear plenty of other operations can cause this cache to grow and even to grow more quickly: ``` while True: cancel(Dummy('x') + 1) ``` I thought the point was that these objects break if they aren't singletonized? I seem to remember that sort of thing being the case for these classes in the polys, but correct me if I am wrong. If that's the case, you'd need a version of lru_cache that also acts like a weak dictionary (i.e., least recently used items aren't removed from the cache if they are also referenced somewhere). 
> Those two Dummy symbols should be created at the top of ellipse.py and then used the 4 times when needed in the various classes -- there is no need to keep creating new Dummy symbols. Wouldn't that change the semantics? A dummy symbol is supposed to be unequal to everything except for itself. Two separate intersections should have dummies that are unequal, but reusing the same Dummy would make them equal. > I thought the point was that these objects break if they aren't singletonized? Yes, actually they do. I just tried removing the cache and a bunch of poly tests failed. The problem is e.g. this: https://github.com/sympy/sympy/blob/f8e33851e174bd686ac8dc88d75f955cf5dc8eeb/sympy/polys/rings.py#L419 Each ring dynamically creates a class for its elements and distinct copies of the same ring would have different classes. I've opened #24585 which for now removes the ring cache altogether so we can see if there's a noticeable impact on performance and if the need for the ring cache is gone. Longer term I would like to remove the dynamic class generation altogether. Also we should just make the construction of rings faster.
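The discussion above weighs keeping a global ring cache against an LRU cache or weak references. As a generic illustration of the weak-reference idea being debated (a sketch only; the `Ring` class, `make_ring` function, and `_ring_cache` name are hypothetical and this is not the actual `PolyRing` code):

```python
import weakref

# Hypothetical stand-in for a polynomial ring object.
class Ring:
    def __init__(self, symbols, domain):
        self.symbols = symbols
        self.domain = domain

# Entries vanish automatically once no strong reference to the ring
# remains, so the cache cannot grow without bound like a plain dict.
_ring_cache = weakref.WeakValueDictionary()

def make_ring(symbols, domain):
    key = (symbols, domain)
    ring = _ring_cache.get(key)
    if ring is None:
        ring = Ring(symbols, domain)
        _ring_cache[key] = ring
    return ring
```

As the comments note, this keeps identical live rings unique but forfeits most of the speed benefit when rings are created transiently, which is why the pull request removes the cache altogether instead.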
[ { "body": "Hello\r\n\r\nI have the following code that uses the latest sympy (1.11.1) tested with python 3.8.10 on Windows 10 and with python 3.11.1 on macOS 13.1, the memory is constantly growing. Any ideas why?\r\n\r\n```\r\nfrom sympy import Point, Ray, Circle\r\nwhile True:\r\n circle = Circle(Point(0.0, 0.0), 0.5)\r\n ray = Ray(Point(0.2, 0.3), Point(0.3, 0.2))\r\n res = circle.intersection(ray)\r\n```\r\n\r\nNOTE: the growth in geometry has been addressed, but the scope of the issue is greater than that module so the issue is still open.", "number": 24581, "title": "memory leak, not a cache issue" } ]
b5646b90853edbacd6406447efd4a5c302bde16a
{ "head_commit": "857ca2ba78d0023e39868c4cba0d675544183beb", "head_commit_message": "fix(integrals): Fix PRDE inexact division\n\nIn the risch integrator Poly.quo_ground was used when the ground domain\nwas a ring leading to an invalid PolyElement with negative exponent.\nThis commit converts the ground domain to a field so that the division\ncan be exact.", "patch_to_review": "diff --git a/doc/src/modules/polys/ringseries.rst b/doc/src/modules/polys/ringseries.rst\nindex 43d5a9e6ce7e..0a57ede91961 100644\n--- a/doc/src/modules/polys/ringseries.rst\n+++ b/doc/src/modules/polys/ringseries.rst\n@@ -35,47 +35,22 @@ Taylor series, we extend it to allow Laurent and even Puiseux series (with\n fractional exponents)::\n \n >>> from sympy.polys.ring_series import rs_cos, rs_tan\n- >>> R, x, y = ring('x, y', QQ)\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n \n >>> rs_cos(x + x*y, x, 3)/x**3\n- -1/2*x**(-1)*y**2 - x**(-1)*y - 1/2*x**(-1) + x**(-3)\n+ x**(-3) + -1/2*x**(-1) + -1*x**(-1)*y + -1/2*x**(-1)*y**2\n \n >>> rs_tan(x**QQ(2, 5)*y**QQ(1, 2), x, 2)\n- 1/3*x**(6/5)*y**(3/2) + x**(2/5)*y**(1/2)\n-\n-By default, ``PolyElement`` did not allow non-natural numbers as exponents. It\n-converted a fraction to an integer and raised an error on getting negative\n-exponents. The goal of the ``ring series`` module is fast series expansion, and\n-not to use the ``polys`` module. The reason we use it as our backend is simply\n-because it implements a sparse representation and most of the basic functions\n-that we need. However, this default behaviour of ``polys`` was limiting for\n-``ring series``.\n-\n-Note that there is no such constraint (in having rational exponents) in the\n-data-structure used by ``polys``- ``dict``. Sparse polynomials\n-(``PolyElement``) use the Python dict to store a polynomial term by term, where\n-a tuple of exponents is the key and the coefficient of that term is the value.\n-There is no reason why we can't have rational values in the ``dict`` so as to\n-support rational exponents.\n-\n-So the approach we took was to modify sparse ``polys`` to allow non-natural\n-exponents. And it turned out to be quite simple. We only had to delete the\n-conversion to ``int`` of exponents in the ``__pow__`` method of\n-``PolyElement``. So::\n-\n- >>> x**QQ(3, 4)\n- x**(3/4)\n-\n-and not ``1`` as was the case earlier.\n-\n-Though this change violates the definition of a polynomial, it doesn't break\n-anything yet. Ideally, we shouldn't modify ``polys`` in any way. But to have\n-all the ``series`` capabilities we want, no other simple way was found. If need\n-be, we can separate the modified part of ``polys`` from core ``polys``. It\n-would be great if any other elegant solution is found.\n-\n-All series returned by the functions of this module are instances of the\n-``PolyElement`` class. To use them with other SymPy types, convert them to\n+ x**(2/5)*y**(1/2) + 1/3*x**(6/5)*y**(3/2)\n+\n+Since polynomial rings cannot handle negative or fractional exponents, we use\n+the :func:`sympy.polys.puiseux.puiseux_ring` function to create a ring that can\n+represent such series.\n+\n+All series returned by the functions of this module are instances of\n+``PolyElement`` or ``PuiseuxPoly``. To use them with other SymPy types, convert\n+them to\n ``Expr``::\n \n >>> from sympy.polys.ring_series import rs_exp\n@@ -213,6 +188,7 @@ by ``polys.ring.ring``.\n \n **Utility functions**\n \n+.. autofunction:: rs_series\n .. autofunction:: rs_is_puiseux\n .. 
autofunction:: rs_puiseux\n .. autofunction:: rs_puiseux2\n@@ -220,3 +196,15 @@ by ``polys.ring.ring``.\n .. autofunction:: rs_fun\n .. autofunction:: mul_xin\n .. autofunction:: pow_xin\n+\n+**Puiseux rings**\n+\n+.. currentmodule:: sympy.polys.puiseux\n+\n+.. autofunction:: puiseux_ring\n+\n+.. autoclass:: PuiseuxRing\n+ :members:\n+\n+.. autoclass:: PuiseuxPoly\n+ :members:\ndiff --git a/pyproject.toml b/pyproject.toml\nindex 3430924e523c..67391201b5f9 100644\n--- a/pyproject.toml\n+++ b/pyproject.toml\n@@ -30,6 +30,14 @@ markers = [\n \"tooslow\",\n ]\n \n+[tool.coverage.report]\n+\n+exclude_lines = [\n+ \"pragma: no cover\",\n+ \"if TYPE_CHECKING:\",\n+ \"assert False\",\n+]\n+\n [tool.ruff]\n # Enable Pyflakes `E` and `F` codes by default.\n lint.select = [\ndiff --git a/sympy/integrals/prde.py b/sympy/integrals/prde.py\nindex 4488cbfc4000..28e91ea0ff3a 100644\n--- a/sympy/integrals/prde.py\n+++ b/sympy/integrals/prde.py\n@@ -533,7 +533,7 @@ def param_poly_rischDE(a, b, q, n, DE):\n if a.is_ground:\n # Normalization: a = 1.\n a = a.LC()\n- b, q = b.quo_ground(a), [qi.quo_ground(a) for qi in q]\n+ b, q = b.to_field().exquo_ground(a), [qi.to_field().exquo_ground(a) for qi in q]\n \n if not b.is_zero and (DE.case == 'base' or\n b.degree() > max(0, DE.d.degree() - 1)):\ndiff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py\nindex 51b81775bb14..41e1ef3aa363 100644\n--- a/sympy/integrals/tests/test_integrals.py\n+++ b/sympy/integrals/tests/test_integrals.py\n@@ -1813,8 +1813,8 @@ def test_issue_15810():\n def test_issue_21024():\n x = Symbol('x', real=True, nonzero=True)\n f = log(x)*log(4*x) + log(3*x + exp(2))\n- F = x*log(x)**2 + x*(1 - 2*log(2)) + (-2*x + 2*x*log(2))*log(x) + \\\n- (x + exp(2)/6)*log(3*x + exp(2)) + exp(2)*log(3*x + exp(2))/6\n+ F = x*log(x)**2 + x*log(3*x + exp(2)) + x*(1 - 2*log(2)) + \\\n+ (-2*x + 2*x*log(2))*log(x) + exp(2)*log(3*x + exp(2))/3\n assert F == integrate(f, x)\n \n f = (x + exp(3))/x**2\ndiff --git a/sympy/polys/domains/complexfield.py b/sympy/polys/domains/complexfield.py\nindex de02e46d190b..69f0bff2c1b3 100644\n--- a/sympy/polys/domains/complexfield.py\n+++ b/sympy/polys/domains/complexfield.py\n@@ -142,10 +142,7 @@ def from_RealField(self, element, base):\n return self.dtype(element)\n \n def from_ComplexField(self, element, base):\n- if self == base:\n- return element\n- else:\n- return self.dtype(element)\n+ return self.dtype(element)\n \n def get_ring(self):\n \"\"\"Returns a ring associated with ``self``. \"\"\"\ndiff --git a/sympy/polys/domains/domain.py b/sympy/polys/domains/domain.py\nindex 1c2b0d3171d6..1d7fc1eac618 100644\n--- a/sympy/polys/domains/domain.py\n+++ b/sympy/polys/domains/domain.py\n@@ -116,7 +116,7 @@ class (``dtype``) for the elements of the domain. For example the\n ZZ[x]\n >>> type(K) # class of the domain\n <class 'sympy.polys.domains.polynomialring.PolynomialRing'>\n- >>> K.dtype # class of the elements\n+ >>> K.dtype # doctest: +SKIP\n <class 'sympy.polys.rings.PolyElement'>\n >>> p_expr = x**2 + 1 # Expr\n >>> p_expr\n@@ -469,7 +469,7 @@ def convert(self, element, base=None):\n \n def of_type(self, element):\n \"\"\"Check if ``a`` is of type ``dtype``. \"\"\"\n- return isinstance(element, self.tp) # XXX: this isn't correct, e.g. PolyElement\n+ return isinstance(element, self.tp)\n \n def __contains__(self, a):\n \"\"\"Check if ``a`` belongs to this domain. 
\"\"\"\ndiff --git a/sympy/polys/domains/fractionfield.py b/sympy/polys/domains/fractionfield.py\nindex 47bc25436b8e..78f5054ddd54 100644\n--- a/sympy/polys/domains/fractionfield.py\n+++ b/sympy/polys/domains/fractionfield.py\n@@ -37,6 +37,10 @@ def __init__(self, domain_or_field, symbols=None, order=None):\n def new(self, element):\n return self.field.field_new(element)\n \n+ def of_type(self, element):\n+ \"\"\"Check if ``a`` is of type ``dtype``. \"\"\"\n+ return self.field.is_element(element)\n+\n @property\n def zero(self):\n return self.field.zero\n@@ -53,13 +57,13 @@ def __str__(self):\n return str(self.domain) + '(' + ','.join(map(str, self.symbols)) + ')'\n \n def __hash__(self):\n- return hash((self.__class__.__name__, self.dtype.field, self.domain, self.symbols))\n+ return hash((self.__class__.__name__, self.field, self.domain, self.symbols))\n \n def __eq__(self, other):\n \"\"\"Returns ``True`` if two domains are equivalent. \"\"\"\n- return isinstance(other, FractionField) and \\\n- (self.dtype.field, self.domain, self.symbols) ==\\\n- (other.dtype.field, other.domain, other.symbols)\n+ if not isinstance(other, FractionField):\n+ return NotImplemented\n+ return self.field == other.field\n \n def to_sympy(self, a):\n \"\"\"Convert ``a`` to a SymPy object. \"\"\"\ndiff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py\nindex bad73208f866..daccdcdede4d 100644\n--- a/sympy/polys/domains/polynomialring.py\n+++ b/sympy/polys/domains/polynomialring.py\n@@ -43,6 +43,10 @@ def __init__(self, domain_or_ring, symbols=None, order=None):\n def new(self, element):\n return self.ring.ring_new(element)\n \n+ def of_type(self, element):\n+ \"\"\"Check if ``a`` is of type ``dtype``. \"\"\"\n+ return self.ring.is_element(element)\n+\n @property\n def zero(self):\n return self.ring.zero\n@@ -59,13 +63,13 @@ def __str__(self):\n return str(self.domain) + '[' + ','.join(map(str, self.symbols)) + ']'\n \n def __hash__(self):\n- return hash((self.__class__.__name__, self.dtype.ring, self.domain, self.symbols))\n+ return hash((self.__class__.__name__, self.ring, self.domain, self.symbols))\n \n def __eq__(self, other):\n \"\"\"Returns `True` if two domains are equivalent. 
\"\"\"\n- return isinstance(other, PolynomialRing) and \\\n- (self.dtype.ring, self.domain, self.symbols) == \\\n- (other.dtype.ring, other.domain, other.symbols)\n+ if not isinstance(other, PolynomialRing):\n+ return NotImplemented\n+ return self.ring == other.ring\n \n def is_unit(self, a):\n \"\"\"Returns ``True`` if ``a`` is a unit of ``self``\"\"\"\ndiff --git a/sympy/polys/domains/realfield.py b/sympy/polys/domains/realfield.py\nindex 79ada6f70737..cb7fac2218c1 100644\n--- a/sympy/polys/domains/realfield.py\n+++ b/sympy/polys/domains/realfield.py\n@@ -171,10 +171,7 @@ def from_AlgebraicField(self, element, base):\n return self.from_sympy(base.to_sympy(element).evalf(self.dps))\n \n def from_RealField(self, element, base):\n- if self == base:\n- return element\n- else:\n- return self.dtype(element)\n+ return self.dtype(element)\n \n def from_ComplexField(self, element, base):\n if not element.imag:\ndiff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py\nindex 5cdcb9f0403a..405665726b49 100644\n--- a/sympy/polys/domains/tests/test_domains.py\n+++ b/sympy/polys/domains/tests/test_domains.py\n@@ -19,9 +19,9 @@\n from sympy.polys.domains.realfield import RealField\n \n from sympy.polys.numberfields.subfield import field_isomorphism\n-from sympy.polys.rings import ring\n+from sympy.polys.rings import ring, PolyElement\n from sympy.polys.specialpolys import cyclotomic_poly\n-from sympy.polys.fields import field\n+from sympy.polys.fields import field, FracElement\n \n from sympy.polys.agca.extensions import FiniteExtension\n \n@@ -657,7 +657,12 @@ def test_Domain_is_unit():\n def test_Domain_convert():\n \n def check_element(e1, e2, K1, K2, K3):\n- assert type(e1) is type(e2), '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3)\n+ if isinstance(e1, PolyElement):\n+ assert isinstance(e2, PolyElement) and e1.ring == e2.ring\n+ elif isinstance(e1, FracElement):\n+ assert isinstance(e2, FracElement) and e1.field == e2.field\n+ else:\n+ assert type(e1) is type(e2), '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3)\n assert e1 == e2, '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3)\n \n def check_domains(K1, K2):\ndiff --git a/sympy/polys/fields.py b/sympy/polys/fields.py\nindex e45063b5f7ad..ee844df55690 100644\n--- a/sympy/polys/fields.py\n+++ b/sympy/polys/fields.py\n@@ -1,7 +1,6 @@\n \"\"\"Sparse rational function fields. \"\"\"\n \n from __future__ import annotations\n-from typing import Any\n from functools import reduce\n \n from operator import add, mul, lt, le, gt, ge\n@@ -100,7 +99,6 @@ def sfield(exprs, *symbols, **options):\n else:\n return (_field, fracs)\n \n-_field_cache: dict[Any, Any] = {}\n \n class FracField(DefaultPrinting):\n \"\"\"Multivariate distributed rational function field. 
\"\"\"\n@@ -120,32 +118,29 @@ def __new__(cls, symbols, domain, order=lex):\n order = ring.order\n \n _hash_tuple = (cls.__name__, symbols, ngens, domain, order)\n- obj = _field_cache.get(_hash_tuple)\n \n- if obj is None:\n- obj = object.__new__(cls)\n- obj._hash_tuple = _hash_tuple\n- obj._hash = hash(_hash_tuple)\n- obj.ring = ring\n- obj.dtype = type(\"FracElement\", (FracElement,), {\"field\": obj})\n- obj.symbols = symbols\n- obj.ngens = ngens\n- obj.domain = domain\n- obj.order = order\n+ obj = object.__new__(cls)\n+ obj._hash_tuple = _hash_tuple\n+ obj._hash = hash(_hash_tuple)\n+ obj.ring = ring\n+ obj.symbols = symbols\n+ obj.ngens = ngens\n+ obj.domain = domain\n+ obj.order = order\n \n- obj.zero = obj.dtype(ring.zero)\n- obj.one = obj.dtype(ring.one)\n+ obj.dtype = FracElement(obj, ring.zero).raw_new\n \n- obj.gens = obj._gens()\n+ obj.zero = obj.dtype(ring.zero)\n+ obj.one = obj.dtype(ring.one)\n \n- for symbol, generator in zip(obj.symbols, obj.gens):\n- if isinstance(symbol, Symbol):\n- name = symbol.name\n+ obj.gens = obj._gens()\n \n- if not hasattr(obj, name):\n- setattr(obj, name, generator)\n+ for symbol, generator in zip(obj.symbols, obj.gens):\n+ if isinstance(symbol, Symbol):\n+ name = symbol.name\n \n- _field_cache[_hash_tuple] = obj\n+ if not hasattr(obj, name):\n+ setattr(obj, name, generator)\n \n return obj\n \n@@ -160,7 +155,7 @@ def __hash__(self):\n return self._hash\n \n def index(self, gen):\n- if isinstance(gen, self.dtype):\n+ if self.is_element(gen):\n return self.ring.index(gen.to_poly())\n else:\n raise ValueError(\"expected a %s, got %s instead\" % (self.dtype,gen))\n@@ -173,8 +168,13 @@ def __eq__(self, other):\n def __ne__(self, other):\n return not self == other\n \n+ def is_element(self, element):\n+ \"\"\"True if ``element`` is an element of this field. False otherwise. \"\"\"\n+ return isinstance(element, FracElement) and element.field == self\n+\n def raw_new(self, numer, denom=None):\n return self.dtype(numer, denom)\n+\n def new(self, numer, denom=None):\n if denom is None: denom = self.ring.one\n numer, denom = numer.cancel(denom)\n@@ -292,17 +292,19 @@ def to_ring(self):\n class FracElement(DomainElement, DefaultPrinting, CantSympify):\n \"\"\"Element of multivariate distributed rational function field. 
\"\"\"\n \n- def __init__(self, numer, denom=None):\n+ def __init__(self, field, numer, denom=None):\n if denom is None:\n- denom = self.field.ring.one\n+ denom = field.ring.one\n elif not denom:\n raise ZeroDivisionError(\"zero denominator\")\n \n+ self.field = field\n self.numer = numer\n self.denom = denom\n \n- def raw_new(f, numer, denom):\n- return f.__class__(numer, denom)\n+ def raw_new(f, numer, denom=None):\n+ return f.__class__(f.field, numer, denom)\n+\n def new(f, numer, denom):\n return f.raw_new(*numer.cancel(denom))\n \n@@ -356,7 +358,7 @@ def sort_key(self):\n return (self.denom.sort_key(), self.numer.sort_key())\n \n def _cmp(f1, f2, op):\n- if isinstance(f2, f1.field.dtype):\n+ if f1.field.is_element(f2):\n return op(f1.sort_key(), f2.sort_key())\n else:\n return NotImplemented\n@@ -406,12 +408,12 @@ def __add__(f, g):\n return f\n elif not f:\n return g\n- elif isinstance(g, field.dtype):\n+ elif field.is_element(g):\n if f.denom == g.denom:\n return f.new(f.numer + g.numer, f.denom)\n else:\n return f.new(f.numer*g.denom + f.denom*g.numer, f.denom*g.denom)\n- elif isinstance(g, field.ring.dtype):\n+ elif field.ring.is_element(g):\n return f.new(f.numer + f.denom*g, f.denom)\n else:\n if isinstance(g, FracElement):\n@@ -430,7 +432,7 @@ def __add__(f, g):\n return f.__radd__(g)\n \n def __radd__(f, c):\n- if isinstance(c, f.field.ring.dtype):\n+ if f.field.ring.is_element(c):\n return f.new(f.numer + f.denom*c, f.denom)\n \n op, g_numer, g_denom = f._extract_ground(c)\n@@ -450,12 +452,12 @@ def __sub__(f, g):\n return f\n elif not f:\n return -g\n- elif isinstance(g, field.dtype):\n+ elif field.is_element(g):\n if f.denom == g.denom:\n return f.new(f.numer - g.numer, f.denom)\n else:\n return f.new(f.numer*g.denom - f.denom*g.numer, f.denom*g.denom)\n- elif isinstance(g, field.ring.dtype):\n+ elif field.ring.is_element(g):\n return f.new(f.numer - f.denom*g, f.denom)\n else:\n if isinstance(g, FracElement):\n@@ -481,7 +483,7 @@ def __sub__(f, g):\n return f.new(f.numer*g_denom - f.denom*g_numer, f.denom*g_denom)\n \n def __rsub__(f, c):\n- if isinstance(c, f.field.ring.dtype):\n+ if f.field.ring.is_element(c):\n return f.new(-f.numer + f.denom*c, f.denom)\n \n op, g_numer, g_denom = f._extract_ground(c)\n@@ -499,9 +501,9 @@ def __mul__(f, g):\n \n if not f or not g:\n return field.zero\n- elif isinstance(g, field.dtype):\n+ elif field.is_element(g):\n return f.new(f.numer*g.numer, f.denom*g.denom)\n- elif isinstance(g, field.ring.dtype):\n+ elif field.ring.is_element(g):\n return f.new(f.numer*g, f.denom)\n else:\n if isinstance(g, FracElement):\n@@ -520,7 +522,7 @@ def __mul__(f, g):\n return f.__rmul__(g)\n \n def __rmul__(f, c):\n- if isinstance(c, f.field.ring.dtype):\n+ if f.field.ring.is_element(c):\n return f.new(f.numer*c, f.denom)\n \n op, g_numer, g_denom = f._extract_ground(c)\n@@ -538,9 +540,9 @@ def __truediv__(f, g):\n \n if not g:\n raise ZeroDivisionError\n- elif isinstance(g, field.dtype):\n+ elif field.is_element(g):\n return f.new(f.numer*g.denom, f.denom*g.numer)\n- elif isinstance(g, field.ring.dtype):\n+ elif field.ring.is_element(g):\n return f.new(f.numer, f.denom*g)\n else:\n if isinstance(g, FracElement):\n@@ -568,7 +570,7 @@ def __truediv__(f, g):\n def __rtruediv__(f, c):\n if not f:\n raise ZeroDivisionError\n- elif isinstance(c, f.field.ring.dtype):\n+ elif f.field.ring.is_element(c):\n return f.new(f.denom*c, f.numer)\n \n op, g_numer, g_denom = f._extract_ground(c)\ndiff --git a/sympy/polys/modulargcd.py b/sympy/polys/modulargcd.py\nindex 
20dfd33d9197..00d1920f69fe 100644\n--- a/sympy/polys/modulargcd.py\n+++ b/sympy/polys/modulargcd.py\n@@ -609,7 +609,8 @@ def _chinese_remainder_reconstruction_multivariate(hp, hq, p, q):\n hpmonoms.difference_update(monoms)\n hqmonoms.difference_update(monoms)\n \n- zero = hp.ring.domain.zero\n+ domain = hp.ring.domain\n+ zero = domain.zero\n \n hpq = hp.ring.zero\n \n@@ -617,7 +618,7 @@ def _chinese_remainder_reconstruction_multivariate(hp, hq, p, q):\n crt_ = _chinese_remainder_reconstruction_multivariate\n else:\n def crt_(cp, cq, p, q):\n- return crt([p, q], [cp, cq], symmetric=True)[0]\n+ return domain(crt([p, q], [cp, cq], symmetric=True)[0])\n \n for monom in monoms:\n hpq[monom] = crt_(hp[monom], hq[monom], p, q)\ndiff --git a/sympy/polys/monomials.py b/sympy/polys/monomials.py\nindex f464ba97f137..e5897a09986d 100644\n--- a/sympy/polys/monomials.py\n+++ b/sympy/polys/monomials.py\n@@ -4,6 +4,7 @@\n from itertools import combinations_with_replacement, product\n from textwrap import dedent\n \n+from sympy.core.cache import cacheit\n from sympy.core import Mul, S, Tuple, sympify\n from sympy.polys.polyerrors import ExactQuotientFailed\n from sympy.polys.polyutils import PicklableWithSlots, dict_from_expr\n@@ -394,8 +395,14 @@ def term_div(a, b, domain):\n class MonomialOps:\n \"\"\"Code generator of fast monomial arithmetic functions. \"\"\"\n \n- def __init__(self, ngens):\n- self.ngens = ngens\n+ @cacheit\n+ def __new__(cls, ngens):\n+ obj = super().__new__(cls)\n+ obj.ngens = ngens\n+ return obj\n+\n+ def __getnewargs__(self):\n+ return (self.ngens,)\n \n def _build(self, code, name):\n ns = {}\n@@ -405,6 +412,7 @@ def _build(self, code, name):\n def _vars(self, name):\n return [ \"%s%s\" % (name, i) for i in range(self.ngens) ]\n \n+ @cacheit\n def mul(self):\n name = \"monomial_mul\"\n template = dedent(\"\"\"\\\n@@ -419,6 +427,7 @@ def %(name)s(A, B):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"B\": \", \".join(B), \"AB\": \", \".join(AB)}\n return self._build(code, name)\n \n+ @cacheit\n def pow(self):\n name = \"monomial_pow\"\n template = dedent(\"\"\"\\\n@@ -431,6 +440,7 @@ def %(name)s(A, k):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"Ak\": \", \".join(Ak)}\n return self._build(code, name)\n \n+ @cacheit\n def mulpow(self):\n name = \"monomial_mulpow\"\n template = dedent(\"\"\"\\\n@@ -445,6 +455,7 @@ def %(name)s(A, B, k):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"B\": \", \".join(B), \"ABk\": \", \".join(ABk)}\n return self._build(code, name)\n \n+ @cacheit\n def ldiv(self):\n name = \"monomial_ldiv\"\n template = dedent(\"\"\"\\\n@@ -459,6 +470,7 @@ def %(name)s(A, B):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"B\": \", \".join(B), \"AB\": \", \".join(AB)}\n return self._build(code, name)\n \n+ @cacheit\n def div(self):\n name = \"monomial_div\"\n template = dedent(\"\"\"\\\n@@ -475,6 +487,7 @@ def %(name)s(A, B):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"B\": \", \".join(B), \"RAB\": \"\\n \".join(RAB), \"R\": \", \".join(R)}\n return self._build(code, name)\n \n+ @cacheit\n def lcm(self):\n name = \"monomial_lcm\"\n template = dedent(\"\"\"\\\n@@ -489,6 +502,7 @@ def %(name)s(A, B):\n code = template % {\"name\": name, \"A\": \", \".join(A), \"B\": \", \".join(B), \"AB\": \", \".join(AB)}\n return self._build(code, name)\n \n+ @cacheit\n def gcd(self):\n name = \"monomial_gcd\"\n template = dedent(\"\"\"\\\ndiff --git a/sympy/polys/puiseux.py b/sympy/polys/puiseux.py\nnew 
file mode 100644\nindex 000000000000..f02c86ddc1ae\n--- /dev/null\n+++ b/sympy/polys/puiseux.py\n@@ -0,0 +1,795 @@\n+\"\"\"\n+Puiseux rings. These are used by the ring_series module to represented\n+truncated Puiseux series. Elements of a Puiseux ring are like polynomials\n+except that the exponents can be negative or rational rather than just\n+non-negative integers.\n+\"\"\"\n+\n+# Previously the ring_series module used PolyElement to represent Puiseux\n+# series. This is problematic because it means that PolyElement has to support\n+# negative and non-integer exponents which most polynomial representations do\n+# not support. This module provides an implementation of a ring for Puiseux\n+# series that can be used by ring_series without breaking the basic invariants\n+# of polynomial rings.\n+#\n+# Ideally there would be more of a proper series type that can keep track of\n+# not not just the leading terms of a truncated series but also the precision\n+# of the series. For now the rings here are just introduced to keep the\n+# interface that ring_series was using before.\n+\n+from __future__ import annotations\n+\n+from sympy.polys.domains import QQ\n+from sympy.polys.rings import PolyRing, PolyElement\n+from sympy.core.add import Add\n+from sympy.core.mul import Mul\n+from sympy.external.gmpy import gcd, lcm\n+\n+\n+from typing import TYPE_CHECKING\n+\n+\n+if TYPE_CHECKING:\n+ from typing import Any, Unpack\n+ from sympy.core.expr import Expr\n+ from sympy.polys.domains import Domain\n+ from collections.abc import Iterable, Iterator\n+\n+\n+def puiseux_ring(\n+ symbols: str | list[Expr], domain: Domain\n+) -> tuple[PuiseuxRing, Unpack[tuple[PuiseuxPoly, ...]]]:\n+ \"\"\"Construct a Puiseux ring.\n+\n+ This function constructs a Puiseux ring with the given symbols and domain.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x y', QQ)\n+ >>> R\n+ PuiseuxRing((x, y), QQ)\n+ >>> p = 5*x**QQ(1,2) + 7/y\n+ >>> p\n+ 7*y**(-1) + 5*x**(1/2)\n+ \"\"\"\n+ ring = PuiseuxRing(symbols, domain)\n+ return (ring,) + ring.gens # type: ignore\n+\n+\n+class PuiseuxRing:\n+ \"\"\"Ring of Puiseux polynomials.\n+\n+ A Puiseux polynomial is a truncated Puiseux series. The exponents of the\n+ monomials can be negative or rational numbers. 
This ring is used by the\n+ ring_series module:\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> from sympy.polys.ring_series import rs_exp, rs_nth_root\n+ >>> ring, x, y = puiseux_ring('x y', QQ)\n+ >>> f = x**2 + y**3\n+ >>> f\n+ y**3 + x**2\n+ >>> f.diff(x)\n+ 2*x\n+ >>> rs_exp(x, x, 5)\n+ 1 + x + 1/2*x**2 + 1/6*x**3 + 1/24*x**4\n+\n+ Importantly the Puiseux ring can represent truncated series with negative\n+ and fractional exponents:\n+\n+ >>> f = 1/x + 1/y**2\n+ >>> f\n+ x**(-1) + y**(-2)\n+ >>> f.diff(x)\n+ -1*x**(-2)\n+\n+ >>> rs_nth_root(8*x + x**2 + x**3, 3, x, 5)\n+ 2*x**(1/3) + 1/12*x**(4/3) + 23/288*x**(7/3) + -139/20736*x**(10/3)\n+\n+ See Also\n+ ========\n+\n+ sympy.polys.ring_series.rs_series\n+ PuiseuxPoly\n+ \"\"\"\n+ def __init__(self, symbols: str | list[Expr], domain: Domain):\n+\n+ poly_ring = PolyRing(symbols, domain)\n+\n+ domain = poly_ring.domain\n+ ngens = poly_ring.ngens\n+\n+ self.poly_ring = poly_ring\n+ self.domain = domain\n+\n+ self.symbols = poly_ring.symbols\n+ self.gens = tuple([self.from_poly(g) for g in poly_ring.gens])\n+ self.ngens = ngens\n+\n+ self.zero = self.from_poly(poly_ring.zero)\n+ self.one = self.from_poly(poly_ring.one)\n+\n+ self.zero_monom = poly_ring.zero_monom # type: ignore\n+ self.monomial_mul = poly_ring.monomial_mul # type: ignore\n+\n+ def __repr__(self) -> str:\n+ return f\"PuiseuxRing({self.symbols}, {self.domain})\"\n+\n+ def __eq__(self, other: Any) -> bool:\n+ if not isinstance(other, PuiseuxRing):\n+ return NotImplemented\n+ return self.symbols == other.symbols and self.domain == other.domain\n+\n+ def from_poly(self, poly: PolyElement) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from a polynomial.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R1, x1 = ring('x', QQ)\n+ >>> R2, x2 = puiseux_ring('x', QQ)\n+ >>> R2.from_poly(x1**2)\n+ x**2\n+ \"\"\"\n+ return PuiseuxPoly(poly, self)\n+\n+ def from_dict(self, terms: dict[tuple[int, ...], Any]) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from a dictionary of terms.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.from_dict({(QQ(1,2),): QQ(3)})\n+ 3*x**(1/2)\n+ \"\"\"\n+ return PuiseuxPoly.from_dict(terms, self)\n+\n+ def from_int(self, n: int) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from an integer.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.from_int(3)\n+ 3\n+ \"\"\"\n+ return self.from_poly(self.poly_ring(n))\n+\n+ def domain_new(self, arg: Any) -> Any:\n+ \"\"\"Create a new element of the domain.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.domain_new(3)\n+ 3\n+ >>> QQ.of_type(_)\n+ True\n+ \"\"\"\n+ return self.poly_ring.domain_new(arg)\n+\n+ def ground_new(self, arg: Any) -> PuiseuxPoly:\n+ \"\"\"Create a new element from a ground element.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring, PuiseuxPoly\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.ground_new(3)\n+ 3\n+ >>> isinstance(_, PuiseuxPoly)\n+ True\n+ \"\"\"\n+ return self.from_poly(self.poly_ring.ground_new(arg))\n+\n+ def __call__(self, arg: Any) -> PuiseuxPoly:\n+ \"\"\"Coerce an element into the 
ring.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R(3)\n+ 3\n+ >>> R({(QQ(1,2),): QQ(3)})\n+ 3*x**(1/2)\n+ \"\"\"\n+ if isinstance(arg, dict):\n+ return self.from_dict(arg)\n+ else:\n+ return self.from_poly(self.poly_ring(arg))\n+\n+ def index(self, x: PuiseuxPoly) -> int:\n+ \"\"\"Return the index of a generator.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x y', QQ)\n+ >>> R.index(x)\n+ 0\n+ >>> R.index(y)\n+ 1\n+ \"\"\"\n+ return self.gens.index(x)\n+\n+\n+def _div_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement:\n+ ring = poly.ring\n+ div = ring.monomial_div\n+ return ring.from_dict({div(m, monom): c for m, c in poly.terms()})\n+\n+\n+def _mul_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement:\n+ ring = poly.ring\n+ mul = ring.monomial_mul\n+ return ring.from_dict({mul(m, monom): c for m, c in poly.terms()})\n+\n+\n+def _div_monom(monom: Iterable[int], div: Iterable[int]) -> tuple[int, ...]:\n+ return tuple(mi - di for mi, di in zip(monom, div))\n+\n+\n+class PuiseuxPoly:\n+ \"\"\"Puiseux polynomial. Represents a truncated Puiseux series.\n+\n+ See the :class:`PuiseuxRing` class for more information.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n+ >>> p = 5*x**2 + 7*y**3\n+ >>> p\n+ 7*y**3 + 5*x**2\n+\n+ The internal representation of a Puiseux polynomial wraps a normal\n+ polynomial. To support negative powers the polynomial is considered to be\n+ divided by a monomial.\n+\n+ >>> p2 = 1/x + 1/y**2\n+ >>> p2.monom # x*y**2\n+ (1, 2)\n+ >>> p2.poly\n+ x + y**2\n+ >>> (y**2 + x) / (x*y**2) == p2\n+ True\n+\n+ To support fractional powers the polynomial is considered to be a function\n+ of ``x**(1/nx) * y**(1/ny) * ...``. The representation keeps track of a\n+ monomial and a list of exponent denominators so that the polynomial can be\n+ used to represent both negative and fractional powers.\n+\n+ >>> p3 = x**QQ(1,2) + y**QQ(2,3)\n+ >>> p3.ns\n+ (2, 3)\n+ >>> p3.poly\n+ x + y**2\n+\n+ See Also\n+ ========\n+\n+ sympy.polys.puiseux.PuiseuxRing\n+ sympy.polys.rings.PolyElement\n+ \"\"\"\n+\n+ ring: PuiseuxRing\n+ poly: PolyElement\n+ monom: tuple[int, ...] | None\n+ ns: tuple[int, ...] | None\n+\n+ def __new__(cls, poly: PolyElement, ring: PuiseuxRing) -> PuiseuxPoly:\n+ return cls._new(ring, poly, None, None)\n+\n+ @classmethod\n+ def _new(\n+ cls,\n+ ring: PuiseuxRing,\n+ poly: PolyElement,\n+ monom: tuple[int, ...] | None,\n+ ns: tuple[int, ...] | None,\n+ ) -> PuiseuxPoly:\n+ poly, monom, ns = cls._normalize(poly, monom, ns)\n+ return cls._new_raw(ring, poly, monom, ns)\n+\n+ @classmethod\n+ def _new_raw(\n+ cls,\n+ ring: PuiseuxRing,\n+ poly: PolyElement,\n+ monom: tuple[int, ...] | None,\n+ ns: tuple[int, ...] | None,\n+ ) -> PuiseuxPoly:\n+ obj = object.__new__(cls)\n+ obj.ring = ring\n+ obj.poly = poly\n+ obj.monom = monom\n+ obj.ns = ns\n+ return obj\n+\n+ def __eq__(self, other: Any) -> bool:\n+ if isinstance(other, PuiseuxPoly):\n+ return (\n+ self.poly == other.poly\n+ and self.monom == other.monom\n+ and self.ns == other.ns\n+ )\n+ elif self.monom is None and self.ns is None:\n+ return self.poly.__eq__(other)\n+ else:\n+ return NotImplemented\n+\n+ @classmethod\n+ def _normalize(\n+ cls,\n+ poly: PolyElement,\n+ monom: tuple[int, ...] | None,\n+ ns: tuple[int, ...] 
| None,\n+ ) -> tuple[PolyElement, tuple[int, ...] | None, tuple[int, ...] | None]:\n+ if monom is None and ns is None:\n+ return poly, None, None\n+\n+ if monom is not None:\n+ degs = [max(d, 0) for d in poly.tail_degrees()]\n+ if all(di >= mi for di, mi in zip(degs, monom)):\n+ poly = _div_poly_monom(poly, monom)\n+ monom = None\n+ elif any(degs):\n+ poly = _div_poly_monom(poly, degs)\n+ monom = _div_monom(monom, degs)\n+\n+ if ns is not None:\n+ factors_d, [poly_d] = poly.deflate()\n+ degrees = poly.degrees()\n+ monom_d = monom if monom is not None else [0] * len(degrees)\n+ ns_new = []\n+ monom_new = []\n+ inflations = []\n+ for fi, ni, di, mi in zip(factors_d, ns, degrees, monom_d):\n+ if di == 0:\n+ g = gcd(ni, mi)\n+ else:\n+ g = gcd(fi, ni, mi)\n+ ns_new.append(ni // g)\n+ monom_new.append(mi // g)\n+ inflations.append(fi // g)\n+\n+ if any(infl > 1 for infl in inflations):\n+ poly_d = poly_d.inflate(inflations)\n+\n+ poly = poly_d\n+\n+ if monom is not None:\n+ monom = tuple(monom_new)\n+\n+ if all(n == 1 for n in ns_new):\n+ ns = None\n+ else:\n+ ns = tuple(ns_new)\n+\n+ return poly, monom, ns\n+\n+ @classmethod\n+ def _monom_fromint(\n+ cls,\n+ monom: tuple[int, ...],\n+ dmonom: tuple[int, ...] | None,\n+ ns: tuple[int, ...] | None,\n+ ) -> tuple[Any, ...]:\n+ if dmonom is not None and ns is not None:\n+ return tuple(QQ(mi - di, ni) for mi, di, ni in zip(monom, dmonom, ns))\n+ elif dmonom is not None:\n+ return tuple(QQ(mi - di) for mi, di in zip(monom, dmonom))\n+ elif ns is not None:\n+ return tuple(QQ(mi, ni) for mi, ni in zip(monom, ns))\n+ else:\n+ return tuple(QQ(mi) for mi in monom)\n+\n+ @classmethod\n+ def _monom_toint(\n+ cls,\n+ monom: tuple[Any, ...],\n+ dmonom: tuple[int, ...] | None,\n+ ns: tuple[int, ...] | None,\n+ ) -> tuple[int, ...]:\n+ if dmonom is not None and ns is not None:\n+ return tuple(\n+ int((mi * ni).numerator + di) for mi, di, ni in zip(monom, dmonom, ns)\n+ )\n+ elif dmonom is not None:\n+ return tuple(int(mi.numerator + di) for mi, di in zip(monom, dmonom))\n+ elif ns is not None:\n+ return tuple(int((mi * ni).numerator) for mi, ni in zip(monom, ns))\n+ else:\n+ return tuple(int(mi.numerator) for mi in monom)\n+\n+ def itermonoms(self) -> Iterator[tuple[Any, ...]]:\n+ \"\"\"Iterate over the monomials of a Puiseux polynomial.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n+ >>> p = 5*x**2 + 7*y**3\n+ >>> list(p.itermonoms())\n+ [(2, 0), (0, 3)]\n+ >>> p[(2, 0)]\n+ 5\n+ \"\"\"\n+ monom, ns = self.monom, self.ns\n+ for m in self.poly.itermonoms():\n+ yield self._monom_fromint(m, monom, ns)\n+\n+ def monoms(self) -> list[tuple[Any, ...]]:\n+ \"\"\"Return a list of the monomials of a Puiseux polynomial.\"\"\"\n+ return list(self.itermonoms())\n+\n+ def __iter__(self) -> Iterator[tuple[tuple[Any, ...], Any]]:\n+ return self.itermonoms()\n+\n+ def __getitem__(self, monom: tuple[int, ...]) -> Any:\n+ monom = self._monom_toint(monom, self.monom, self.ns)\n+ return self.poly[monom]\n+\n+ def __len__(self) -> int:\n+ return len(self.poly)\n+\n+ def iterterms(self) -> Iterator[tuple[tuple[Any, ...], Any]]:\n+ \"\"\"Iterate over the terms of a Puiseux polynomial.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n+ >>> p = 5*x**2 + 7*y**3\n+ >>> list(p.iterterms())\n+ [((2, 0), 5), ((0, 3), 7)]\n+ \"\"\"\n+ monom, ns = self.monom, self.ns\n+ for m, coeff in self.poly.iterterms():\n+ mq = 
self._monom_fromint(m, monom, ns)\n+ yield mq, coeff\n+\n+ def terms(self) -> list[tuple[tuple[Any, ...], Any]]:\n+ \"\"\"Return a list of the terms of a Puiseux polynomial.\"\"\"\n+ return list(self.iterterms())\n+\n+ @property\n+ def is_term(self) -> bool:\n+ \"\"\"Return True if the Puiseux polynomial is a single term.\"\"\"\n+ return self.poly.is_term\n+\n+ def to_dict(self) -> dict[tuple[int, ...], Any]:\n+ \"\"\"Return a dictionary representation of a Puiseux polynomial.\"\"\"\n+ return dict(self.iterterms())\n+\n+ @classmethod\n+ def from_dict(\n+ cls, terms: dict[tuple[Any, ...], Any], ring: PuiseuxRing\n+ ) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from a dictionary of terms.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring, PuiseuxPoly\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> PuiseuxPoly.from_dict({(QQ(1,2),): QQ(3)}, R)\n+ 3*x**(1/2)\n+ >>> R.from_dict({(QQ(1,2),): QQ(3)})\n+ 3*x**(1/2)\n+ \"\"\"\n+ ns = [1] * ring.ngens\n+ mon = [0] * ring.ngens\n+ for mo in terms:\n+ ns = [lcm(n, m.denominator) for n, m in zip(ns, mo)]\n+ mon = [min(m, n) for m, n in zip(mo, mon)]\n+\n+ if not any(mon):\n+ monom = None\n+ else:\n+ monom = tuple(-int((m * n).numerator) for m, n in zip(mon, ns))\n+\n+ if all(n == 1 for n in ns):\n+ ns_final = None\n+ else:\n+ ns_final = tuple(ns)\n+\n+ terms_p = {cls._monom_toint(m, monom, ns_final): coeff for m, coeff in terms.items()}\n+\n+ poly = ring.poly_ring.from_dict(terms_p)\n+\n+ return cls._new(ring, poly, monom, ns_final)\n+\n+ def as_expr(self) -> Expr:\n+ \"\"\"Convert a Puiseux polynomial to :class:`~sympy.core.expr.Expr`.\n+\n+ >>> from sympy import QQ, Expr\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> p = 5*x**2 + 7*x**3\n+ >>> p.as_expr()\n+ 7*x**3 + 5*x**2\n+ >>> isinstance(_, Expr)\n+ True\n+ \"\"\"\n+ ring = self.ring\n+ dom = ring.domain\n+ symbols = ring.symbols\n+ terms = []\n+ for monom, coeff in self.iterterms():\n+ coeff_expr = dom.to_sympy(coeff)\n+ monoms_expr = []\n+ for i, m in enumerate(monom):\n+ monoms_expr.append(symbols[i] ** m)\n+ terms.append(Mul(coeff_expr, *monoms_expr))\n+ return Add(*terms)\n+\n+ def __repr__(self) -> str:\n+\n+ def format_power(base: str, exp: int) -> str:\n+ if exp == 1:\n+ return base\n+ elif exp >= 0 and int(exp) == exp:\n+ return f\"{base}**{exp}\"\n+ else:\n+ return f\"{base}**({exp})\"\n+\n+ ring = self.ring\n+ dom = ring.domain\n+\n+ syms = [str(s) for s in ring.symbols]\n+ terms_str = []\n+ for monom, coeff in sorted(self.terms()):\n+ monom_str = \"*\".join(format_power(s, e) for s, e in zip(syms, monom) if e)\n+ if coeff == dom.one:\n+ if monom_str:\n+ terms_str.append(monom_str)\n+ else:\n+ terms_str.append(\"1\")\n+ elif not monom_str:\n+ terms_str.append(str(coeff))\n+ else:\n+ terms_str.append(f\"{coeff}*{monom_str}\")\n+\n+ return \" + \".join(terms_str)\n+\n+ def _unify(\n+ self, other: PuiseuxPoly\n+ ) -> tuple[\n+ PolyElement, PolyElement, tuple[int, ...] | None, tuple[int, ...] 
| None\n+ ]:\n+ \"\"\"Bring two Puiseux polynomials to a common monom and ns.\"\"\"\n+ poly1, monom1, ns1 = self.poly, self.monom, self.ns\n+ poly2, monom2, ns2 = other.poly, other.monom, other.ns\n+\n+ if monom1 == monom2 and ns1 == ns2:\n+ return poly1, poly2, monom1, ns1\n+\n+ if ns1 == ns2:\n+ ns = ns1\n+ elif ns1 is not None and ns2 is not None:\n+ ns = tuple(lcm(n1, n2) for n1, n2 in zip(ns1, ns2))\n+ f1 = [n // n1 for n, n1 in zip(ns, ns1)]\n+ f2 = [n // n2 for n, n2 in zip(ns, ns2)]\n+ poly1 = poly1.inflate(f1)\n+ poly2 = poly2.inflate(f2)\n+ if monom1 is not None:\n+ monom1 = tuple(m * f for m, f in zip(monom1, f1))\n+ if monom2 is not None:\n+ monom2 = tuple(m * f for m, f in zip(monom2, f2))\n+ elif ns2 is not None:\n+ ns = ns2\n+ poly1 = poly1.inflate(ns)\n+ if monom1 is not None:\n+ monom1 = tuple(m * n for m, n in zip(monom1, ns))\n+ elif ns1 is not None:\n+ ns = ns1\n+ poly2 = poly2.inflate(ns)\n+ if monom2 is not None:\n+ monom2 = tuple(m * n for m, n in zip(monom2, ns))\n+ else:\n+ assert False\n+\n+ if monom1 == monom2:\n+ monom = monom1\n+ elif monom1 is not None and monom2 is not None:\n+ monom = tuple(max(m1, m2) for m1, m2 in zip(monom1, monom2))\n+ poly1 = _mul_poly_monom(poly1, _div_monom(monom, monom1))\n+ poly2 = _mul_poly_monom(poly2, _div_monom(monom, monom2))\n+ elif monom2 is not None:\n+ monom = monom2\n+ poly1 = _mul_poly_monom(poly1, monom2)\n+ elif monom1 is not None:\n+ monom = monom1\n+ poly2 = _mul_poly_monom(poly2, monom1)\n+ else:\n+ assert False\n+\n+ return poly1, poly2, monom, ns\n+\n+ def __pos__(self) -> PuiseuxPoly:\n+ return self\n+\n+ def __neg__(self) -> PuiseuxPoly:\n+ return self._new_raw(self.ring, -self.poly, self.monom, self.ns)\n+\n+ def __add__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, PuiseuxPoly):\n+ if self.ring != other.ring:\n+ raise ValueError(\"Cannot add Puiseux polynomials from different rings\")\n+ return self._add(other)\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._add_ground(domain.convert_from(QQ(other), QQ))\n+ elif domain.of_type(other):\n+ return self._add_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __radd__(self, other: Any) -> PuiseuxPoly:\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._add_ground(domain.convert_from(QQ(other), QQ))\n+ elif domain.of_type(other):\n+ return self._add_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __sub__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, PuiseuxPoly):\n+ if self.ring != other.ring:\n+ raise ValueError(\n+ \"Cannot subtract Puiseux polynomials from different rings\"\n+ )\n+ return self._sub(other)\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._sub_ground(domain.convert_from(QQ(other), QQ))\n+ elif domain.of_type(other):\n+ return self._sub_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __rsub__(self, other: Any) -> PuiseuxPoly:\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._rsub_ground(domain.convert_from(QQ(other), QQ))\n+ elif domain.of_type(other):\n+ return self._rsub_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __mul__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, PuiseuxPoly):\n+ if self.ring != other.ring:\n+ raise ValueError(\n+ \"Cannot multiply Puiseux polynomials from different rings\"\n+ )\n+ return self._mul(other)\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._mul_ground(domain.convert_from(QQ(other), QQ))\n+ elif 
domain.of_type(other):\n+ return self._mul_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __rmul__(self, other: Any) -> PuiseuxPoly:\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._mul_ground(domain.convert_from(QQ(other), QQ))\n+ elif domain.of_type(other):\n+ return self._mul_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __pow__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, int):\n+ if other >= 0:\n+ return self._pow_pint(other)\n+ else:\n+ return self._pow_nint(-other)\n+ elif QQ.of_type(other):\n+ return self._pow_rational(other)\n+ else:\n+ return NotImplemented\n+\n+ def __truediv__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, PuiseuxPoly):\n+ if self.ring != other.ring:\n+ raise ValueError(\n+ \"Cannot divide Puiseux polynomials from different rings\"\n+ )\n+ return self._mul(other._inv())\n+ domain = self.ring.domain\n+ if isinstance(other, int):\n+ return self._mul_ground(domain.convert_from(QQ(1, other), QQ))\n+ elif domain.of_type(other):\n+ return self._div_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def __rtruediv__(self, other: Any) -> PuiseuxPoly:\n+ if isinstance(other, int):\n+ return self._inv()._mul_ground(self.ring.domain.convert_from(QQ(other), QQ))\n+ elif self.ring.domain.of_type(other):\n+ return self._inv()._mul_ground(other)\n+ else:\n+ return NotImplemented\n+\n+ def _add(self, other: PuiseuxPoly) -> PuiseuxPoly:\n+ poly1, poly2, monom, ns = self._unify(other)\n+ return self._new(self.ring, poly1 + poly2, monom, ns)\n+\n+ def _add_ground(self, ground: Any) -> PuiseuxPoly:\n+ return self._add(self.ring.ground_new(ground))\n+\n+ def _sub(self, other: PuiseuxPoly) -> PuiseuxPoly:\n+ poly1, poly2, monom, ns = self._unify(other)\n+ return self._new(self.ring, poly1 - poly2, monom, ns)\n+\n+ def _sub_ground(self, ground: Any) -> PuiseuxPoly:\n+ return self._sub(self.ring.ground_new(ground))\n+\n+ def _rsub_ground(self, ground: Any) -> PuiseuxPoly:\n+ return self.ring.ground_new(ground)._sub(self)\n+\n+ def _mul(self, other: PuiseuxPoly) -> PuiseuxPoly:\n+ poly1, poly2, monom, ns = self._unify(other)\n+ if monom is not None:\n+ monom = tuple(2 * e for e in monom)\n+ return self._new(self.ring, poly1 * poly2, monom, ns)\n+\n+ def _mul_ground(self, ground: Any) -> PuiseuxPoly:\n+ return self._new_raw(self.ring, self.poly * ground, self.monom, self.ns)\n+\n+ def _div_ground(self, ground: Any) -> PuiseuxPoly:\n+ return self._new_raw(self.ring, self.poly / ground, self.monom, self.ns)\n+\n+ def _pow_pint(self, n: int) -> PuiseuxPoly:\n+ assert n >= 0\n+ monom = self.monom\n+ if monom is not None:\n+ monom = tuple(m * n for m in monom)\n+ return self._new(self.ring, self.poly**n, monom, self.ns)\n+\n+ def _pow_nint(self, n: int) -> PuiseuxPoly:\n+ return self._inv()._pow_pint(n)\n+\n+ def _pow_rational(self, n: Any) -> PuiseuxPoly:\n+ if not self.is_term:\n+ raise ValueError(\"Only monomials can be raised to a rational power\")\n+ [(monom, coeff)] = self.terms()\n+ domain = self.ring.domain\n+ if not domain.is_one(coeff):\n+ raise ValueError(\"Only monomials can be raised to a rational power\")\n+ monom = tuple(m * n for m in monom)\n+ return self.ring.from_dict({monom: domain.one})\n+\n+ def _inv(self) -> PuiseuxPoly:\n+ if not self.is_term:\n+ raise ValueError(\"Only terms can be inverted\")\n+ [(monom, coeff)] = self.terms()\n+ domain = self.ring.domain\n+ if not domain.is_Field and not domain.is_one(coeff):\n+ raise ValueError(\"Cannot invert non-unit coefficient\")\n+ monom = tuple(-m 
for m in monom)\n+ coeff = 1 / coeff\n+ return self.ring.from_dict({monom: coeff})\n+\n+ def diff(self, x: PuiseuxPoly) -> PuiseuxPoly:\n+ \"\"\"Differentiate a Puiseux polynomial with respect to a variable.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n+ >>> p = 5*x**2 + 7*y**3\n+ >>> p.diff(x)\n+ 10*x\n+ >>> p.diff(y)\n+ 21*y**2\n+ \"\"\"\n+ ring = self.ring\n+ i = ring.index(x)\n+ g = {}\n+ for expv, coeff in self.iterterms():\n+ n = expv[i]\n+ if n:\n+ e = list(expv)\n+ e[i] -= 1\n+ g[tuple(e)] = coeff * n\n+ return ring(g)\ndiff --git a/sympy/polys/ring_series.py b/sympy/polys/ring_series.py\nindex d08b0c0507d1..4afcba37627c 100644\n--- a/sympy/polys/ring_series.py\n+++ b/sympy/polys/ring_series.py\n@@ -43,6 +43,7 @@\n \n from sympy.polys.domains import QQ, EX\n from sympy.polys.rings import PolyElement, ring, sring\n+from sympy.polys.puiseux import PuiseuxPoly\n from sympy.polys.polyerrors import DomainError\n from sympy.polys.monomials import (monomial_min, monomial_mul, monomial_div,\n monomial_ldiv)\n@@ -89,7 +90,8 @@ def _invert_monoms(p1):\n \n def _giant_steps(target):\n \"\"\"Return a list of precision steps for the Newton's method\"\"\"\n- res = giant_steps(2, target)\n+ # We use ceil here because giant_steps cannot handle flint.fmpq\n+ res = giant_steps(2, math.ceil(target))\n if res[0] != 2:\n res = [2] + res\n return res\n@@ -113,13 +115,13 @@ def rs_trunc(p1, x, prec):\n x**5 + x + 1\n \"\"\"\n R = p1.ring\n- p = R.zero\n+ p = {}\n i = R.gens.index(x)\n for exp1 in p1:\n if exp1[i] >= prec:\n continue\n p[exp1] = p1[exp1]\n- return p\n+ return R(p)\n \n def rs_is_puiseux(p, x):\n \"\"\"\n@@ -131,15 +133,15 @@ def rs_is_puiseux(p, x):\n ========\n \n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import rs_is_puiseux\n- >>> R, x = ring('x', QQ)\n+ >>> R, x = puiseux_ring('x', QQ)\n >>> p = x**QQ(2,5) + x**QQ(2,3) + x\n >>> rs_is_puiseux(p, x)\n True\n \"\"\"\n index = p.ring.gens.index(x)\n- for k in p:\n+ for k in p.itermonoms():\n if k[index] != int(k[index]):\n return True\n if k[index] < 0:\n@@ -156,12 +158,12 @@ def rs_puiseux(f, p, x, prec):\n ========\n \n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import rs_puiseux, rs_exp\n- >>> R, x = ring('x', QQ)\n+ >>> R, x = puiseux_ring('x', QQ)\n >>> p = x**QQ(2,5) + x**QQ(2,3) + x\n >>> rs_puiseux(rs_exp,p, x, 1)\n- 1/2*x**(4/5) + x**(2/3) + x**(2/5) + 1\n+ 1 + x**(2/5) + x**(2/3) + 1/2*x**(4/5)\n \"\"\"\n index = p.ring.gens.index(x)\n n = 1\n@@ -229,18 +231,18 @@ def rs_mul(p1, p2, x, prec):\n 3*x**2 + 3*x + 1\n \"\"\"\n R = p1.ring\n- p = R.zero\n+ p = {}\n if R.__class__ != p2.ring.__class__ or R != p2.ring:\n raise ValueError('p1 and p2 must have the same ring')\n iv = R.gens.index(x)\n- if not isinstance(p2, PolyElement):\n+ if not isinstance(p2, (PolyElement, PuiseuxPoly)):\n raise ValueError('p2 must be a polynomial')\n if R == p2.ring:\n get = p.get\n- items2 = list(p2.items())\n+ items2 = p2.terms()\n items2.sort(key=lambda e: e[0][iv])\n if R.ngens == 1:\n- for exp1, v1 in p1.items():\n+ for exp1, v1 in p1.iterterms():\n for exp2, v2 in items2:\n exp = exp1[0] + exp2[0]\n if exp < prec:\n@@ -250,7 +252,7 @@ def rs_mul(p1, p2, x, prec):\n break\n else:\n monomial_mul = R.monomial_mul\n- for 
exp1, v1 in p1.items():\n+ for exp1, v1 in p1.iterterms():\n for exp2, v2 in items2:\n if exp1[iv] + exp2[iv] < prec:\n exp = monomial_mul(exp1, exp2)\n@@ -258,8 +260,7 @@ def rs_mul(p1, p2, x, prec):\n else:\n break\n \n- p.strip_zero()\n- return p\n+ return R(p)\n \n def rs_square(p1, x, prec):\n \"\"\"\n@@ -277,10 +278,10 @@ def rs_square(p1, x, prec):\n 6*x**2 + 4*x + 1\n \"\"\"\n R = p1.ring\n- p = R.zero\n+ p = {}\n iv = R.gens.index(x)\n get = p.get\n- items = list(p1.items())\n+ items = p1.terms()\n items.sort(key=lambda e: e[0][iv])\n monomial_mul = R.monomial_mul\n for i in range(len(items)):\n@@ -292,14 +293,13 @@ def rs_square(p1, x, prec):\n p[exp] = get(exp, 0) + v1*v2\n else:\n break\n- p = p.imul_num(2)\n+ p = {m: 2*v for m, v in p.items()}\n get = p.get\n- for expv, v in p1.items():\n+ for expv, v in p1.iterterms():\n if 2*expv[iv] < prec:\n e2 = monomial_mul(expv, expv)\n p[e2] = get(e2, 0) + v**2\n- p.strip_zero()\n- return p\n+ return R(p)\n \n def rs_pow(p1, n, x, prec):\n \"\"\"\n@@ -753,7 +753,7 @@ def rs_diff(p, x):\n \"\"\"\n R = p.ring\n n = R.gens.index(x)\n- p1 = R.zero\n+ p1 = {}\n mn = [0]*R.ngens\n mn[n] = 1\n mn = tuple(mn)\n@@ -761,7 +761,7 @@ def rs_diff(p, x):\n if expv[n]:\n e = monomial_ldiv(expv, mn)\n p1[e] = R.domain_new(p[expv]*expv[n])\n- return p1\n+ return R(p1)\n \n def rs_integrate(p, x):\n \"\"\"\n@@ -784,7 +784,7 @@ def rs_integrate(p, x):\n 1/3*x**3*y**3 + 1/2*x**2\n \"\"\"\n R = p.ring\n- p1 = R.zero\n+ p1 = {}\n n = R.gens.index(x)\n mn = [0]*R.ngens\n mn[n] = 1\n@@ -793,7 +793,7 @@ def rs_integrate(p, x):\n for expv in p:\n e = monomial_mul(expv, mn)\n p1[e] = R.domain_new(p[expv]/(expv[n] + 1))\n- return p1\n+ return R(p1)\n \n def rs_fun(p, f, *args):\n r\"\"\"\n@@ -858,31 +858,31 @@ def mul_xin(p, i, n):\n `x\\_i` is the ith variable in ``p``.\n \"\"\"\n R = p.ring\n- q = R(0)\n- for k, v in p.items():\n+ q = {}\n+ for k, v in p.terms():\n k1 = list(k)\n k1[i] += n\n q[tuple(k1)] = v\n- return q\n+ return R(q)\n \n def pow_xin(p, i, n):\n \"\"\"\n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import pow_xin\n- >>> R, x, y = ring('x, y', QQ)\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n >>> p = x**QQ(2,5) + x + x**QQ(2,3)\n >>> index = p.ring.gens.index(x)\n >>> pow_xin(p, index, 15)\n- x**15 + x**10 + x**6\n+ x**6 + x**10 + x**15\n \"\"\"\n R = p.ring\n- q = R(0)\n- for k, v in p.items():\n+ q = {}\n+ for k, v in p.terms():\n k1 = list(k)\n k1[i] *= n\n q[tuple(k1)] = v\n- return q\n+ return R(q)\n \n def _nth_root1(p, n, x, prec):\n \"\"\"\n@@ -973,7 +973,7 @@ def rs_nth_root(p, n, x, prec):\n c = p[zm]\n if R.domain is EX:\n c_expr = c.as_expr()\n- const = c_expr**QQ(1, n)\n+ const = EX(c_expr**QQ(1, n))\n elif isinstance(c, PolyElement):\n try:\n c_expr = c.as_expr()\n@@ -991,7 +991,7 @@ def rs_nth_root(p, n, x, prec):\n else:\n res = _nth_root1(p, n, x, prec)\n if m:\n- m = QQ(m, n)\n+ m = QQ(m) / n\n res = mul_xin(res, index, m)\n return res\n \n@@ -1008,13 +1008,13 @@ def rs_log(p, x, prec):\n ========\n \n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import rs_log\n- >>> R, x = ring('x', QQ)\n+ >>> R, x = puiseux_ring('x', QQ)\n >>> rs_log(1 + x, x, 8)\n- 1/7*x**7 - 1/6*x**6 + 1/5*x**5 - 1/4*x**4 + 1/3*x**3 - 1/2*x**2 + x\n+ x + -1/2*x**2 + 1/3*x**3 + -1/4*x**4 + 1/5*x**5 + -1/6*x**6 + 
1/7*x**7\n >>> rs_log(x**QQ(3, 2) + 1, x, 5)\n- 1/3*x**(9/2) - 1/2*x**3 + x**(3/2)\n+ x**(3/2) + -1/2*x**3 + 1/3*x**(9/2)\n \"\"\"\n if rs_is_puiseux(p, x):\n return rs_puiseux(rs_log, p, x, prec)\n@@ -1030,7 +1030,7 @@ def rs_log(p, x, prec):\n c_expr = c.as_expr()\n if R.domain is EX:\n const = log(c_expr)\n- elif isinstance(c, PolyElement):\n+ elif isinstance(c, (PolyElement, PuiseuxPoly)):\n try:\n const = R(log(c_expr))\n except ValueError:\n@@ -1400,13 +1400,13 @@ def rs_sin(p, x, prec):\n ========\n \n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import rs_sin\n- >>> R, x, y = ring('x, y', QQ)\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n >>> rs_sin(x + x*y, x, 4)\n- -1/6*x**3*y**3 - 1/2*x**3*y**2 - 1/2*x**3*y - 1/6*x**3 + x*y + x\n+ x + x*y + -1/6*x**3 + -1/2*x**3*y + -1/2*x**3*y**2 + -1/6*x**3*y**3\n >>> rs_sin(x**QQ(3, 2) + x*y**QQ(7, 5), x, 4)\n- -1/2*x**(7/2)*y**(14/5) - 1/6*x**3*y**(21/5) + x**(3/2) + x*y**(7/5)\n+ x*y**(7/5) + x**(3/2) + -1/6*x**3*y**(21/5) + -1/2*x**(7/2)*y**(14/5)\n \n See Also\n ========\n@@ -1470,13 +1470,13 @@ def rs_cos(p, x, prec):\n ========\n \n >>> from sympy.polys.domains import QQ\n- >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n >>> from sympy.polys.ring_series import rs_cos\n- >>> R, x, y = ring('x, y', QQ)\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n >>> rs_cos(x + x*y, x, 4)\n- -1/2*x**2*y**2 - x**2*y - 1/2*x**2 + 1\n+ 1 + -1/2*x**2 + -1*x**2*y + -1/2*x**2*y**2\n >>> rs_cos(x + x*y, x, 4)/x**QQ(7, 5)\n- -1/2*x**(3/5)*y**2 - x**(3/5)*y - 1/2*x**(3/5) + x**(-7/5)\n+ x**(-7/5) + -1/2*x**(3/5) + -1*x**(3/5)*y + -1/2*x**(3/5)*y**2\n \n See Also\n ========\n@@ -1830,7 +1830,7 @@ def rs_compose_add(p1, p2):\n np2e = rs_hadamard_exp(np2)\n np3e = rs_mul(np1e, np2e, x, prec)\n np3 = rs_hadamard_exp(np3e, True)\n- np3a = (np3[(0,)] - np3)/x\n+ np3a = (np3[(0,)] - np3) / x\n q = rs_integrate(np3a, x)\n q = rs_exp(q, x, prec)\n q = _invert_monoms(q)\n@@ -1960,8 +1960,8 @@ def rs_series(expr, a, prec):\n Parameters\n ==========\n \n- expr : :class:`Expr`\n- a : :class:`Symbol` with respect to which expr is to be expanded\n+ expr : :class:`~.Expr`\n+ a : :class:`~.Symbol` with respect to which expr is to be expanded\n prec : order of the series expansion\n \n Currently supports multivariate Taylor series expansion. This is much\ndiff --git a/sympy/polys/rings.py b/sympy/polys/rings.py\nindex 9103b1737af1..2e902f30f809 100644\n--- a/sympy/polys/rings.py\n+++ b/sympy/polys/rings.py\n@@ -1,12 +1,12 @@\n \"\"\"Sparse polynomial rings. \"\"\"\n \n from __future__ import annotations\n-from typing import Any\n \n from operator import add, mul, lt, le, gt, ge\n from functools import reduce\n from types import GeneratorType\n \n+from sympy.core.cache import cacheit\n from sympy.core.expr import Expr\n from sympy.core.intfunc import igcd\n from sympy.core.symbol import Symbol, symbols as _symbols\n@@ -192,7 +192,6 @@ def _parse_symbols(symbols):\n \n raise GeneratorsError(\"expected a string, Symbol or expression or a non-empty sequence of strings, Symbols or expressions\")\n \n-_ring_cache: dict[Any, Any] = {}\n \n class PolyRing(DefaultPrinting, IPolys):\n \"\"\"Multivariate distributed polynomial ring. 
\"\"\"\n@@ -210,61 +209,58 @@ def __new__(cls, symbols, domain, order=lex):\n order = OrderOpt.preprocess(order)\n \n _hash_tuple = (cls.__name__, symbols, ngens, domain, order)\n- obj = _ring_cache.get(_hash_tuple)\n-\n- if obj is None:\n- if domain.is_Composite and set(symbols) & set(domain.symbols):\n- raise GeneratorsError(\"polynomial ring and it's ground domain share generators\")\n-\n- obj = object.__new__(cls)\n- obj._hash_tuple = _hash_tuple\n- obj._hash = hash(_hash_tuple)\n- obj.dtype = type(\"PolyElement\", (PolyElement,), {\"ring\": obj})\n- obj.symbols = symbols\n- obj.ngens = ngens\n- obj.domain = domain\n- obj.order = order\n-\n- obj.zero_monom = (0,)*ngens\n- obj.gens = obj._gens()\n- obj._gens_set = set(obj.gens)\n-\n- obj._one = [(obj.zero_monom, domain.one)]\n-\n- if ngens:\n- # These expect monomials in at least one variable\n- codegen = MonomialOps(ngens)\n- obj.monomial_mul = codegen.mul()\n- obj.monomial_pow = codegen.pow()\n- obj.monomial_mulpow = codegen.mulpow()\n- obj.monomial_ldiv = codegen.ldiv()\n- obj.monomial_div = codegen.div()\n- obj.monomial_lcm = codegen.lcm()\n- obj.monomial_gcd = codegen.gcd()\n- else:\n- monunit = lambda a, b: ()\n- obj.monomial_mul = monunit\n- obj.monomial_pow = monunit\n- obj.monomial_mulpow = lambda a, b, c: ()\n- obj.monomial_ldiv = monunit\n- obj.monomial_div = monunit\n- obj.monomial_lcm = monunit\n- obj.monomial_gcd = monunit\n-\n-\n- if order is lex:\n- obj.leading_expv = max\n- else:\n- obj.leading_expv = lambda f: max(f, key=order)\n \n- for symbol, generator in zip(obj.symbols, obj.gens):\n- if isinstance(symbol, Symbol):\n- name = symbol.name\n+ if domain.is_Composite and set(symbols) & set(domain.symbols):\n+ raise GeneratorsError(\"polynomial ring and it's ground domain share generators\")\n+\n+ obj = object.__new__(cls)\n+ obj._hash_tuple = _hash_tuple\n+ obj._hash = hash(_hash_tuple)\n+ obj.symbols = symbols\n+ obj.ngens = ngens\n+ obj.domain = domain\n+ obj.order = order\n+\n+ obj.dtype = PolyElement(obj, ()).new\n+\n+ obj.zero_monom = (0,)*ngens\n+ obj.gens = obj._gens()\n+ obj._gens_set = set(obj.gens)\n+\n+ obj._one = [(obj.zero_monom, domain.one)]\n+\n+ if ngens:\n+ # These expect monomials in at least one variable\n+ codegen = MonomialOps(ngens)\n+ obj.monomial_mul = codegen.mul()\n+ obj.monomial_pow = codegen.pow()\n+ obj.monomial_mulpow = codegen.mulpow()\n+ obj.monomial_ldiv = codegen.ldiv()\n+ obj.monomial_div = codegen.div()\n+ obj.monomial_lcm = codegen.lcm()\n+ obj.monomial_gcd = codegen.gcd()\n+ else:\n+ monunit = lambda a, b: ()\n+ obj.monomial_mul = monunit\n+ obj.monomial_pow = monunit\n+ obj.monomial_mulpow = lambda a, b, c: ()\n+ obj.monomial_ldiv = monunit\n+ obj.monomial_div = monunit\n+ obj.monomial_lcm = monunit\n+ obj.monomial_gcd = monunit\n+\n+\n+ if order is lex:\n+ obj.leading_expv = max\n+ else:\n+ obj.leading_expv = lambda f: max(f, key=order)\n \n- if not hasattr(obj, name):\n- setattr(obj, name, generator)\n+ for symbol, generator in zip(obj.symbols, obj.gens):\n+ if isinstance(symbol, Symbol):\n+ name = symbol.name\n \n- _ring_cache[_hash_tuple] = obj\n+ if not hasattr(obj, name):\n+ setattr(obj, name, generator)\n \n return obj\n \n@@ -304,6 +300,13 @@ def __ne__(self, other):\n return not self == other\n \n def clone(self, symbols=None, domain=None, order=None):\n+ # Need a hashable tuple for cacheit to work\n+ if symbols is not None and isinstance(symbols, list):\n+ symbols = tuple(symbols)\n+ return self._clone(symbols, domain, order)\n+\n+ @cacheit\n+ def _clone(self, symbols, 
domain, order):\n return self.__class__(symbols or self.symbols, domain or self.domain, order or self.order)\n \n def monomial_basis(self, i):\n@@ -314,12 +317,16 @@ def monomial_basis(self, i):\n \n @property\n def zero(self):\n- return self.dtype()\n+ return self.dtype([])\n \n @property\n def one(self):\n return self.dtype(self._one)\n \n+ def is_element(self, element):\n+ \"\"\"True if ``element`` is an element of this ring. False otherwise. \"\"\"\n+ return isinstance(element, PolyElement) and element.ring == self\n+\n def domain_new(self, element, orig_domain=None):\n return self.domain.convert(element, orig_domain)\n \n@@ -423,7 +430,7 @@ def index(self, gen):\n i = -i - 1\n else:\n raise ValueError(\"invalid generator index: %s\" % gen)\n- elif isinstance(gen, self.dtype):\n+ elif self.is_element(gen):\n try:\n i = self.gens.index(gen)\n except ValueError:\n@@ -579,8 +586,24 @@ def symmetric_poly(self, n):\n class PolyElement(DomainElement, DefaultPrinting, CantSympify, dict):\n \"\"\"Element of multivariate distributed polynomial ring. \"\"\"\n \n+ def __init__(self, ring, init):\n+ super().__init__(init)\n+ self.ring = ring\n+ # This check would be too slow to run every time:\n+ # self._check()\n+\n+ def _check(self):\n+ assert isinstance(self, PolyElement)\n+ assert isinstance(self.ring, PolyRing)\n+ dom = self.ring.domain\n+ assert isinstance(dom, Domain)\n+ for monom, coeff in self.items():\n+ assert dom.of_type(coeff)\n+ assert len(monom) == self.ring.ngens\n+ assert all(isinstance(exp, int) and exp >= 0 for exp in monom)\n+\n def new(self, init):\n- return self.__class__(init)\n+ return self.__class__(self.ring, init)\n \n def parent(self):\n return self.ring.to_domain()\n@@ -695,7 +718,7 @@ def __eq__(p1, p2):\n \"\"\"\n if not p2:\n return not p1\n- elif isinstance(p2, PolyElement) and p2.ring == p1.ring:\n+ elif p1.ring.is_element(p2):\n return dict.__eq__(p1, p2)\n elif len(p1) > 1:\n return False\n@@ -709,7 +732,7 @@ def almosteq(p1, p2, tolerance=None):\n \"\"\"Approximate equality test for polynomials. 
\"\"\"\n ring = p1.ring\n \n- if isinstance(p2, ring.dtype):\n+ if ring.is_element(p2):\n if set(p1.keys()) != set(p2.keys()):\n return False\n \n@@ -733,7 +756,7 @@ def sort_key(self):\n return (len(self), self.terms())\n \n def _cmp(p1, p2, op):\n- if isinstance(p2, p1.ring.dtype):\n+ if p1.ring.is_element(p2):\n return op(p1.sort_key(), p2.sort_key())\n else:\n return NotImplemented\n@@ -956,7 +979,7 @@ def __add__(p1, p2):\n if not p2:\n return p1.copy()\n ring = p1.ring\n- if isinstance(p2, ring.dtype):\n+ if ring.is_element(p2):\n p = p1.copy()\n get = p.get\n zero = ring.domain.zero\n@@ -1032,7 +1055,7 @@ def __sub__(p1, p2):\n if not p2:\n return p1.copy()\n ring = p1.ring\n- if isinstance(p2, ring.dtype):\n+ if ring.is_element(p2):\n p = p1.copy()\n get = p.get\n zero = ring.domain.zero\n@@ -1092,6 +1115,7 @@ def __rsub__(p1, n):\n for expv in p1:\n p[expv] = -p1[expv]\n p += n\n+ # p._check()\n return p\n \n def __mul__(p1, p2):\n@@ -1114,7 +1138,7 @@ def __mul__(p1, p2):\n p = ring.zero\n if not p1 or not p2:\n return p\n- elif isinstance(p2, ring.dtype):\n+ elif ring.is_element(p2):\n get = p.get\n zero = ring.domain.zero\n monomial_mul = ring.monomial_mul\n@@ -1124,6 +1148,7 @@ def __mul__(p1, p2):\n exp = monomial_mul(exp1, exp2)\n p[exp] = get(exp, zero) + v1*v2\n p.strip_zero()\n+ # p._check()\n return p\n elif isinstance(p2, PolyElement):\n if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring:\n@@ -1142,6 +1167,7 @@ def __mul__(p1, p2):\n v = v1*p2\n if v:\n p[exp1] = v\n+ # p._check()\n return p\n \n def __rmul__(p1, p2):\n@@ -1188,6 +1214,11 @@ def __pow__(self, n):\n x**3 + 3*x**2*y**2 + 3*x*y**4 + y**6\n \n \"\"\"\n+ if not isinstance(n, int):\n+ raise TypeError(\"exponent must be an integer, got %s\" % n)\n+ elif n < 0:\n+ raise ValueError(\"exponent must be a non-negative integer, got %s\" % n)\n+\n ring = self.ring\n \n if not n:\n@@ -1202,6 +1233,7 @@ def __pow__(self, n):\n p[ring.monomial_pow(monom, n)] = coeff\n else:\n p[ring.monomial_pow(monom, n)] = coeff**n\n+ # p._check()\n return p\n \n # For ring series, we need negative and rational exponent support only\n@@ -1300,6 +1332,7 @@ def square(self):\n k2 = monomial_mul(k, k)\n p[k2] = get(k2, zero) + v**2\n p.strip_zero()\n+ # p._check()\n return p\n \n def __divmod__(p1, p2):\n@@ -1307,7 +1340,7 @@ def __divmod__(p1, p2):\n \n if not p2:\n raise ZeroDivisionError(\"polynomial division\")\n- elif isinstance(p2, ring.dtype):\n+ elif ring.is_element(p2):\n return p1.div(p2)\n elif isinstance(p2, PolyElement):\n if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring:\n@@ -1325,14 +1358,20 @@ def __divmod__(p1, p2):\n return (p1.quo_ground(p2), p1.rem_ground(p2))\n \n def __rdivmod__(p1, p2):\n- return NotImplemented\n+ ring = p1.ring\n+ try:\n+ p2 = ring.ground_new(p2)\n+ except CoercionFailed:\n+ return NotImplemented\n+ else:\n+ return p2.div(p1)\n \n def __mod__(p1, p2):\n ring = p1.ring\n \n if not p2:\n raise ZeroDivisionError(\"polynomial division\")\n- elif isinstance(p2, ring.dtype):\n+ elif ring.is_element(p2):\n return p1.rem(p2)\n elif isinstance(p2, PolyElement):\n if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring:\n@@ -1350,18 +1389,21 @@ def __mod__(p1, p2):\n return p1.rem_ground(p2)\n \n def __rmod__(p1, p2):\n- return NotImplemented\n+ ring = p1.ring\n+ try:\n+ p2 = ring.ground_new(p2)\n+ except CoercionFailed:\n+ return NotImplemented\n+ else:\n+ return p2.rem(p1)\n \n- def __truediv__(p1, p2):\n+ def __floordiv__(p1, p2):\n 
ring = p1.ring\n \n if not p2:\n raise ZeroDivisionError(\"polynomial division\")\n- elif isinstance(p2, ring.dtype):\n- if p2.is_monomial:\n- return p1*(p2**(-1))\n- else:\n- return p1.quo(p2)\n+ elif ring.is_element(p2):\n+ return p1.quo(p2)\n elif isinstance(p2, PolyElement):\n if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring:\n pass\n@@ -1377,13 +1419,45 @@ def __truediv__(p1, p2):\n else:\n return p1.quo_ground(p2)\n \n- def __rtruediv__(p1, p2):\n- return NotImplemented\n+ def __rfloordiv__(p1, p2):\n+ ring = p1.ring\n+ try:\n+ p2 = ring.ground_new(p2)\n+ except CoercionFailed:\n+ return NotImplemented\n+ else:\n+ return p2.quo(p1)\n \n- __floordiv__ = __truediv__\n- __rfloordiv__ = __rtruediv__\n+ def __truediv__(p1, p2):\n+ ring = p1.ring\n+\n+ if not p2:\n+ raise ZeroDivisionError(\"polynomial division\")\n+ elif ring.is_element(p2):\n+ return p1.exquo(p2)\n+ elif isinstance(p2, PolyElement):\n+ if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring:\n+ pass\n+ elif isinstance(p2.ring.domain, PolynomialRing) and p2.ring.domain.ring == ring:\n+ return p2.__rtruediv__(p1)\n+ else:\n+ return NotImplemented\n \n- # TODO: use // (__floordiv__) for exquo()?\n+ try:\n+ p2 = ring.domain_new(p2)\n+ except CoercionFailed:\n+ return NotImplemented\n+ else:\n+ return p1.quo_ground(p2)\n+\n+ def __rtruediv__(p1, p2):\n+ ring = p1.ring\n+ try:\n+ p2 = ring.ground_new(p2)\n+ except CoercionFailed:\n+ return NotImplemented\n+ else:\n+ return p2.exquo(p1)\n \n def _term_div(self):\n zm = self.ring.zero_monom\n@@ -1738,7 +1812,7 @@ def coeff(self, element):\n \"\"\"\n if element == 1:\n return self._get_coeff(self.ring.zero_monom)\n- elif isinstance(element, self.ring.dtype):\n+ elif self.ring.is_element(element):\n terms = list(element.iterterms())\n if len(terms) == 1:\n monom, coeff = terms[0]\ndiff --git a/sympy/polys/tests/test_fields.py b/sympy/polys/tests/test_fields.py\nindex da9f39101599..4f85a00d75dc 100644\n--- a/sympy/polys/tests/test_fields.py\n+++ b/sympy/polys/tests/test_fields.py\n@@ -29,19 +29,10 @@ def test_FracField___hash__():\n \n def test_FracField___eq__():\n assert field(\"x,y,z\", QQ)[0] == field(\"x,y,z\", QQ)[0]\n- assert field(\"x,y,z\", QQ)[0] is field(\"x,y,z\", QQ)[0]\n-\n assert field(\"x,y,z\", QQ)[0] != field(\"x,y,z\", ZZ)[0]\n- assert field(\"x,y,z\", QQ)[0] is not field(\"x,y,z\", ZZ)[0]\n-\n assert field(\"x,y,z\", ZZ)[0] != field(\"x,y,z\", QQ)[0]\n- assert field(\"x,y,z\", ZZ)[0] is not field(\"x,y,z\", QQ)[0]\n-\n assert field(\"x,y,z\", QQ)[0] != field(\"x,y\", QQ)[0]\n- assert field(\"x,y,z\", QQ)[0] is not field(\"x,y\", QQ)[0]\n-\n assert field(\"x,y\", QQ)[0] != field(\"x,y,z\", QQ)[0]\n- assert field(\"x,y\", QQ)[0] is not field(\"x,y,z\", QQ)[0]\n \n def test_sfield():\n x = symbols(\"x\")\n@@ -99,34 +90,34 @@ def test_FracElement_from_expr():\n F, X, Y, Z = field((x, y, z), ZZ)\n \n f = F.from_expr(1)\n- assert f == 1 and isinstance(f, F.dtype)\n+ assert f == 1 and F.is_element(f)\n \n f = F.from_expr(Rational(3, 7))\n- assert f == F(3)/7 and isinstance(f, F.dtype)\n+ assert f == F(3)/7 and F.is_element(f)\n \n f = F.from_expr(x)\n- assert f == X and isinstance(f, F.dtype)\n+ assert f == X and F.is_element(f)\n \n f = F.from_expr(Rational(3,7)*x)\n- assert f == X*Rational(3, 7) and isinstance(f, F.dtype)\n+ assert f == X*Rational(3, 7) and F.is_element(f)\n \n f = F.from_expr(1/x)\n- assert f == 1/X and isinstance(f, F.dtype)\n+ assert f == 1/X and F.is_element(f)\n \n f = F.from_expr(x*y*z)\n- assert f 
== X*Y*Z and isinstance(f, F.dtype)\n+ assert f == X*Y*Z and F.is_element(f)\n \n f = F.from_expr(x*y/z)\n- assert f == X*Y/Z and isinstance(f, F.dtype)\n+ assert f == X*Y/Z and F.is_element(f)\n \n f = F.from_expr(x*y*z + x*y + x)\n- assert f == X*Y*Z + X*Y + X and isinstance(f, F.dtype)\n+ assert f == X*Y*Z + X*Y + X and F.is_element(f)\n \n f = F.from_expr((x*y*z + x*y + x)/(x*y + 7))\n- assert f == (X*Y*Z + X*Y + X)/(X*Y + 7) and isinstance(f, F.dtype)\n+ assert f == (X*Y*Z + X*Y + X)/(X*Y + 7) and F.is_element(f)\n \n f = F.from_expr(x**3*y*z + x**2*y**7 + 1)\n- assert f == X**3*Y*Z + X**2*Y**7 + 1 and isinstance(f, F.dtype)\n+ assert f == X**3*Y*Z + X**2*Y**7 + 1 and F.is_element(f)\n \n raises(ValueError, lambda: F.from_expr(2**x))\n raises(ValueError, lambda: F.from_expr(7*x + sqrt(2)))\ndiff --git a/sympy/polys/tests/test_puiseux.py b/sympy/polys/tests/test_puiseux.py\nnew file mode 100644\nindex 000000000000..031881e9d12c\n--- /dev/null\n+++ b/sympy/polys/tests/test_puiseux.py\n@@ -0,0 +1,204 @@\n+#\n+# Tests for PuiseuxRing and PuiseuxPoly\n+#\n+\n+from sympy.testing.pytest import raises\n+\n+from sympy import ZZ, QQ, ring\n+from sympy.polys.puiseux import PuiseuxRing, PuiseuxPoly, puiseux_ring\n+\n+from sympy.abc import x, y\n+\n+\n+def test_puiseux_ring():\n+ R, px = puiseux_ring('x', QQ)\n+ R2, px2 = puiseux_ring([x], QQ)\n+ assert isinstance(R, PuiseuxRing)\n+ assert isinstance(px, PuiseuxPoly)\n+ assert R == R2\n+ assert px == px2\n+ assert R == PuiseuxRing('x', QQ)\n+ assert R == PuiseuxRing([x], QQ)\n+ assert R != PuiseuxRing('y', QQ)\n+ assert R != PuiseuxRing('x', ZZ)\n+ assert R != PuiseuxRing('x, y', QQ)\n+ assert R != QQ\n+ assert str(R) == 'PuiseuxRing((x,), QQ)'\n+\n+\n+def test_puiseux_ring_attributes():\n+ R1, px1, py1 = ring('x, y', QQ)\n+ R2, px2, py2 = puiseux_ring('x, y', QQ)\n+ assert R2.domain == QQ\n+ assert R2.symbols == (x, y)\n+ assert R2.gens == (px2, py2)\n+ assert R2.ngens == 2\n+ assert R2.poly_ring == R1\n+ assert R2.zero == PuiseuxPoly(R1.zero, R2)\n+ assert R2.one == PuiseuxPoly(R1.one, R2)\n+ assert R2.zero_monom == R1.zero_monom == (0, 0) # type: ignore\n+ assert R2.monomial_mul((1, 2), (3, 4)) == (4, 6)\n+\n+\n+def test_puiseux_ring_methods():\n+ R1, px1, py1 = ring('x, y', QQ)\n+ R2, px2, py2 = puiseux_ring('x, y', QQ)\n+ assert R2({(1, 2): 3}) == 3*px2*py2**2\n+ assert R2(px1) == px2\n+ assert R2(1) == R2.one\n+ assert R2(QQ(1,2)) == QQ(1,2)*R2.one\n+ assert R2.from_poly(px1) == px2\n+ assert R2.from_poly(px1) != py2\n+ assert R2.from_dict({(1, 2): QQ(3)}) == 3*px2*py2**2\n+ assert R2.from_dict({(QQ(1,2), 2): QQ(3)}) == 3*px2**QQ(1,2)*py2**2\n+ assert R2.from_int(3) == 3*R2.one\n+ assert R2.domain_new(3) == QQ(3)\n+ assert QQ.of_type(R2.domain_new(3))\n+ assert R2.ground_new(3) == 3*R2.one\n+ assert isinstance(R2.ground_new(3), PuiseuxPoly)\n+ assert R2.index(px2) == 0\n+ assert R2.index(py2) == 1\n+\n+\n+def test_puiseux_poly():\n+ R1, px1 = ring('x', QQ)\n+ R2, px2 = puiseux_ring('x', QQ)\n+ assert PuiseuxPoly(px1, R2) == px2\n+ assert px2.ring == R2\n+ assert px2.as_expr() == px1.as_expr() == x\n+ assert px1 != px2\n+ assert R2.one == px2**0 == 1\n+ assert px2 == px1\n+ assert px2 != 2.0\n+ assert px2**QQ(1,2) != px1\n+\n+\n+def test_puiseux_poly_normalization():\n+ R, x = puiseux_ring('x', QQ)\n+ assert (x**2 + 1) / x == x + 1/x == R({(1,): 1, (-1,): 1})\n+ assert (x**QQ(1,6))**2 == x**QQ(1,3) == R({(QQ(1,3),): 1})\n+ assert (x**QQ(1,6))**(-2) == x**(-QQ(1,3)) == R({(-QQ(1,3),): 1})\n+ assert (x**QQ(1,6))**QQ(1,2) == x**QQ(1,12) == 
R({(QQ(1,12),): 1})\n+ assert (x**QQ(1,6))**6 == x == R({(1,): 1})\n+ assert x**QQ(1,6) * x**QQ(1,3) == x**QQ(1,2) == R({(QQ(1,2),): 1})\n+ assert 1/x * x**2 == x == R({(1,): 1})\n+ assert 1/x**QQ(1,3) * x**QQ(1,3) == 1 == R({(0,): 1})\n+\n+\n+def test_puiseux_poly_monoms():\n+ R, x = puiseux_ring('x', QQ)\n+ assert x.monoms() == [(1,)]\n+ assert list(x) == [(1,)]\n+ assert (x**2 + 1).monoms() == [(2,), (0,)]\n+ assert R({(1,): 1, (-1,): 1}).monoms() == [(1,), (-1,)]\n+ assert R({(QQ(1,3),): 1}).monoms() == [(QQ(1,3),)]\n+ assert R({(-QQ(1,3),): 1}).monoms() == [(-QQ(1,3),)]\n+ p = x**QQ(1,6)\n+ assert p[(QQ(1,6),)] == 1\n+ raises(KeyError, lambda: p[(1,)])\n+ assert p.to_dict() == {(QQ(1,6),): 1}\n+ assert R(p.to_dict()) == p\n+ assert PuiseuxPoly.from_dict({(QQ(1,6),): 1}, R) == p\n+\n+\n+def test_puiseux_poly_repr():\n+ R, x = puiseux_ring('x', QQ)\n+ assert repr(x) == 'x'\n+ assert repr(x**QQ(1,2)) == 'x**(1/2)'\n+ assert repr(1/x) == 'x**(-1)'\n+ assert repr(2*x**2 + 1) == '1 + 2*x**2'\n+ assert repr(R.one) == '1'\n+ assert repr(2*R.one) == '2'\n+\n+\n+def test_puiseux_poly_unify():\n+ R, x = puiseux_ring('x', QQ)\n+ assert 1/x + x == x + 1/x == R({(1,): 1, (-1,): 1})\n+ assert repr(1/x + x) == 'x**(-1) + x'\n+ assert 1/x + 1/x == 2/x == R({(-1,): 2})\n+ assert repr(1/x + 1/x) == '2*x**(-1)'\n+ assert x**QQ(1,2) + x**QQ(1,2) == 2*x**QQ(1,2) == R({(QQ(1,2),): 2})\n+ assert repr(x**QQ(1,2) + x**QQ(1,2)) == '2*x**(1/2)'\n+ assert x**QQ(1,2) + x**QQ(1,3) == R({(QQ(1,2),): 1, (QQ(1,3),): 1})\n+ assert repr(x**QQ(1,2) + x**QQ(1,3)) == 'x**(1/3) + x**(1/2)'\n+ assert x + x**QQ(1,2) == R({(1,): 1, (QQ(1,2),): 1})\n+ assert repr(x + x**QQ(1,2)) == 'x**(1/2) + x'\n+ assert 1/x**QQ(1,2) + 1/x**QQ(1,3) == R({(-QQ(1,2),): 1, (-QQ(1,3),): 1})\n+ assert repr(1/x**QQ(1,2) + 1/x**QQ(1,3)) == 'x**(-1/2) + x**(-1/3)'\n+ assert 1/x + x**QQ(1,2) == x**QQ(1,2) + 1/x == R({(-1,): 1, (QQ(1,2),): 1})\n+ assert repr(1/x + x**QQ(1,2)) == 'x**(-1) + x**(1/2)'\n+\n+\n+def test_puiseux_poly_arit():\n+ R, x = puiseux_ring('x', QQ)\n+ R2, y = puiseux_ring('y', QQ)\n+ p = x**2 + 1\n+ assert +p == p\n+ assert -p == -1 - x**2\n+ assert p + p == 2*p == 2*x**2 + 2\n+ assert p + 1 == 1 + p == x**2 + 2\n+ assert p + QQ(1,2) == QQ(1,2) + p == x**2 + QQ(3,2)\n+ assert p - p == 0\n+ assert p - 1 == -1 + p == x**2\n+ assert p - QQ(1,2) == -QQ(1,2) + p == x**2 + QQ(1,2)\n+ assert 1 - p == -p + 1 == -x**2\n+ assert QQ(1,2) - p == -p + QQ(1,2) == -x**2 - QQ(1,2)\n+ assert p * p == x**4 + 2*x**2 + 1\n+ assert p * 1 == 1 * p == p\n+ assert 2 * p == p * 2 == 2*x**2 + 2\n+ assert p * QQ(1,2) == QQ(1,2) * p == QQ(1,2)*x**2 + QQ(1,2)\n+ assert x**QQ(1,2) * x**QQ(1,2) == x\n+ raises(ValueError, lambda: x + y)\n+ raises(ValueError, lambda: x - y)\n+ raises(ValueError, lambda: x * y)\n+ raises(TypeError, lambda: x + None)\n+ raises(TypeError, lambda: x - None)\n+ raises(TypeError, lambda: x * None)\n+ raises(TypeError, lambda: None + x)\n+ raises(TypeError, lambda: None - x)\n+ raises(TypeError, lambda: None * x)\n+\n+\n+def test_puiseux_poly_div():\n+ R, x = puiseux_ring('x', QQ)\n+ R2, y = puiseux_ring('y', QQ)\n+ p = x**2 - 1\n+ assert p / 1 == p\n+ assert p / QQ(1,2) == 2*p == 2*x**2 - 2\n+ assert p / x == x - 1/x == R({(1,): 1, (-1,): -1})\n+ assert 2 / x == 2*x**-1 == R({(-1,): 2})\n+ assert QQ(1,2) / x == QQ(1,2)*x**-1 == 1/(2*x) == 1/x/2 == R({(-1,): QQ(1,2)})\n+ raises(ZeroDivisionError, lambda: p / 0)\n+ raises(ValueError, lambda: (x + 1) / (x + 2))\n+ raises(ValueError, lambda: (x + 1) / (x + 1))\n+ raises(ValueError, lambda: x 
/ y)\n+ raises(TypeError, lambda: x / None)\n+ raises(TypeError, lambda: None / x)\n+\n+\n+def test_puiseux_poly_pow():\n+ R, x = puiseux_ring('x', QQ)\n+ Rz, xz = puiseux_ring('x', ZZ)\n+ assert x**0 == 1 == R({(0,): 1})\n+ assert x**1 == x == R({(1,): 1})\n+ assert x**2 == x*x == R({(2,): 1})\n+ assert x**QQ(1,2) == R({(QQ(1,2),): 1})\n+ assert x**-1 == 1/x == R({(-1,): 1})\n+ assert x**-QQ(1,2) == 1/x**QQ(1,2) == R({(-QQ(1,2),): 1})\n+ assert (2*x)**-1 == 1/(2*x) == QQ(1,2)/x == QQ(1,2)*x**-1 == R({(-1,): QQ(1,2)})\n+ assert 2/x**2 == 2*x**-2 == R({(-2,): 2})\n+ assert 2/xz**2 == 2*xz**-2 == Rz({(-2,): 2})\n+ raises(TypeError, lambda: x**None)\n+ raises(ValueError, lambda: (x + 1)**-1)\n+ raises(ValueError, lambda: (x + 1)**QQ(1,2))\n+ raises(ValueError, lambda: (2*x)**QQ(1,2))\n+ raises(ValueError, lambda: (2*xz)**-1)\n+\n+\n+def test_puiseux_poly_diff():\n+ R, x, y = puiseux_ring('x, y', QQ)\n+ assert (x**2 + 1).diff(x) == 2*x\n+ assert (x**2 + 1).diff(y) == 0\n+ assert (x**2 + y**2).diff(x) == 2*x\n+ assert (x**QQ(1,2) + y**QQ(1,2)).diff(x) == QQ(1,2)*x**-QQ(1,2)\n+ assert ((x*y)**QQ(1,2)).diff(x) == QQ(1,2)*y**QQ(1,2)*x**-QQ(1,2)\ndiff --git a/sympy/polys/tests/test_ring_series.py b/sympy/polys/tests/test_ring_series.py\nindex 0f70c05d3888..b19156fbaceb 100644\n--- a/sympy/polys/tests/test_ring_series.py\n+++ b/sympy/polys/tests/test_ring_series.py\n@@ -1,5 +1,6 @@\n from sympy.polys.domains import QQ, EX, RR\n from sympy.polys.rings import ring\n+from sympy.polys.puiseux import puiseux_ring\n from sympy.polys.ring_series import (_invert_monoms, rs_integrate,\n rs_trunc, rs_mul, rs_square, rs_pow, _has_constant_term, rs_hadamard_exp,\n rs_series_from_list, rs_exp, rs_log, rs_newton, rs_series_inversion,\n@@ -141,11 +142,11 @@ def test_series_from_list():\n p2 += cx*rs_pow(p, i, x, h)\n assert p1 == p2\n \n+\n def test_log():\n R, x = ring('x', QQ)\n p = 1 + x\n- p1 = rs_log(p, x, 4)/x**2\n- assert p1 == Rational(1, 3)*x - S.Half + x**(-1)\n+ assert rs_log(p, x, 4) == x - x**2/2 + x**3/3\n p = 1 + x +2*x**2/3\n p1 = rs_log(p, x, 9)\n assert p1 == -17*x**8/648 + 13*x**7/189 - 11*x**6/162 - x**5/45 + \\\n@@ -172,6 +173,7 @@ def test_log():\n p = x + x**2 + 3\n assert rs_log(p, x, 10).compose(x, 5) == EX(log(3) + Rational(19281291595, 9920232))\n \n+\n def test_exp():\n R, x = ring('x', QQ)\n p = x + x**4\n@@ -222,8 +224,9 @@ def test_fun():\n assert rs_fun(p, rs_tan, x, 10) == rs_tan(p, x, 10)\n assert rs_fun(p, _tan1, x, 10) == _tan1(p, x, 10)\n \n+\n def test_nth_root():\n- R, x, y = ring('x, y', QQ)\n+ R, x, y = puiseux_ring('x, y', QQ)\n assert rs_nth_root(1 + x**2*y, 4, x, 10) == -77*x**8*y**4/2048 + \\\n 7*x**6*y**3/128 - 3*x**4*y**2/32 + x**2*y/4 + 1\n assert rs_nth_root(1 + x*y + x**2*y**3, 3, x, 5) == -x**4*y**6/9 + \\\n@@ -236,14 +239,15 @@ def test_nth_root():\n \n # Constant term in series\n a = symbols('a')\n- R, x, y = ring('x, y', EX)\n- assert rs_nth_root(x + a, 3, x, 4) == EX(5/(81*a**QQ(8, 3)))*x**3 - \\\n+ R, x, y = puiseux_ring('x, y', EX)\n+ assert rs_nth_root(x + EX(a), 3, x, 4) == EX(5/(81*a**QQ(8, 3)))*x**3 - \\\n EX(1/(9*a**QQ(5, 3)))*x**2 + EX(1/(3*a**QQ(2, 3)))*x + EX(a**QQ(1, 3))\n assert rs_nth_root(x**QQ(2, 3) + x**2*y + 5, 2, x, 3) == -EX(sqrt(5)/100)*\\\n x**QQ(8, 3)*y - EX(sqrt(5)/16000)*x**QQ(8, 3) + EX(sqrt(5)/10)*x**2*y + \\\n EX(sqrt(5)/2000)*x**2 - EX(sqrt(5)/200)*x**QQ(4, 3) + \\\n EX(sqrt(5)/10)*x**QQ(2, 3) + EX(sqrt(5))\n \n+\n def test_atan():\n R, x, y = ring('x, y', QQ)\n assert rs_atan(x, x, 9) == -x**7/7 + x**5/5 - x**3/3 + x\n@@ -272,8 
+276,7 @@ def test_asin():\n \n def test_tan():\n R, x, y = ring('x, y', QQ)\n- assert rs_tan(x, x, 9)/x**5 == \\\n- Rational(17, 315)*x**2 + Rational(2, 15) + Rational(1, 3)*x**(-2) + x**(-4)\n+ assert rs_tan(x, x, 9) == x + x**3/3 + QQ(2,15)*x**5 + QQ(17,315)*x**7\n assert rs_tan(x*y + x**2*y**3, x, 9) == 4*x**8*y**11/3 + 17*x**8*y**9/45 + \\\n 4*x**7*y**9/3 + 17*x**7*y**7/315 + x**6*y**9/3 + 2*x**6*y**7/3 + \\\n x**5*y**7 + 2*x**5*y**5/15 + x**4*y**5 + x**3*y**3/3 + x**2*y**3 + x*y\n@@ -301,18 +304,19 @@ def test_tan():\n assert rs_atan(p, x, 10).compose(x, 10) == EX(atan(5) + S(67701870330562640) / \\\n 668083460499)\n \n+\n def test_cot():\n- R, x, y = ring('x, y', QQ)\n+ R, x, y = puiseux_ring('x, y', QQ)\n assert rs_cot(x**6 + x**7, x, 8) == x**(-6) - x**(-5) + x**(-4) - \\\n x**(-3) + x**(-2) - x**(-1) + 1 - x + x**2 - x**3 + x**4 - x**5 + \\\n 2*x**6/3 - 4*x**7/3\n assert rs_cot(x + x**2*y, x, 5) == -x**4*y**5 - x**4*y/15 + x**3*y**4 - \\\n x**3/45 - x**2*y**3 - x**2*y/3 + x*y**2 - x/3 - y + x**(-1)\n \n+\n def test_sin():\n R, x, y = ring('x, y', QQ)\n- assert rs_sin(x, x, 9)/x**5 == \\\n- Rational(-1, 5040)*x**2 + Rational(1, 120) - Rational(1, 6)*x**(-2) + x**(-4)\n+ assert rs_sin(x, x, 9) == x - x**3/6 + x**5/120 - x**7/5040\n assert rs_sin(x*y + x**2*y**3, x, 9) == x**8*y**11/12 - \\\n x**8*y**9/720 + x**7*y**9/12 - x**7*y**7/5040 - x**6*y**9/6 + \\\n x**6*y**7/24 - x**5*y**7/2 + x**5*y**5/120 - x**4*y**5/2 - \\\n@@ -337,8 +341,7 @@ def test_sin():\n \n def test_cos():\n R, x, y = ring('x, y', QQ)\n- assert rs_cos(x, x, 9)/x**5 == \\\n- Rational(1, 40320)*x**3 - Rational(1, 720)*x + Rational(1, 24)*x**(-1) - S.Half*x**(-3) + x**(-5)\n+ assert rs_cos(x, x, 9) == 1 - x**2/2 + x**4/24 - x**6/720 + x**8/40320\n assert rs_cos(x*y + x**2*y**3, x, 9) == x**8*y**12/24 - \\\n x**8*y**10/48 + x**8*y**8/40320 + x**7*y**10/6 - \\\n x**7*y**8/120 + x**6*y**8/4 - x**6*y**6/720 + x**5*y**6/6 - \\\n@@ -372,7 +375,7 @@ def test_cos_sin():\n \n def test_atanh():\n R, x, y = ring('x, y', QQ)\n- assert rs_atanh(x, x, 9)/x**5 == Rational(1, 7)*x**2 + Rational(1, 5) + Rational(1, 3)*x**(-2) + x**(-4)\n+ assert rs_atanh(x, x, 9) == x + x**3/3 + x**5/5 + x**7/7\n assert rs_atanh(x*y + x**2*y**3, x, 9) == 2*x**8*y**11 + x**8*y**9 + \\\n 2*x**7*y**9 + x**7*y**7/7 + x**6*y**9/3 + x**6*y**7 + x**5*y**7 + \\\n x**5*y**5/5 + x**4*y**5 + x**3*y**3/3 + x**2*y**3 + x*y\n@@ -395,7 +398,7 @@ def test_atanh():\n \n def test_sinh():\n R, x, y = ring('x, y', QQ)\n- assert rs_sinh(x, x, 9)/x**5 == Rational(1, 5040)*x**2 + Rational(1, 120) + Rational(1, 6)*x**(-2) + x**(-4)\n+ assert rs_sinh(x, x, 9) == x + x**3/6 + x**5/120 + x**7/5040\n assert rs_sinh(x*y + x**2*y**3, x, 9) == x**8*y**11/12 + \\\n x**8*y**9/720 + x**7*y**9/12 + x**7*y**7/5040 + x**6*y**9/6 + \\\n x**6*y**7/24 + x**5*y**7/2 + x**5*y**5/120 + x**4*y**5/2 + \\\n@@ -403,8 +406,7 @@ def test_sinh():\n \n def test_cosh():\n R, x, y = ring('x, y', QQ)\n- assert rs_cosh(x, x, 9)/x**5 == Rational(1, 40320)*x**3 + Rational(1, 720)*x + Rational(1, 24)*x**(-1) + \\\n- S.Half*x**(-3) + x**(-5)\n+ assert rs_cosh(x, x, 9) == 1 + x**2/2 + x**4/24 + x**6/720 + x**8/40320\n assert rs_cosh(x*y + x**2*y**3, x, 9) == x**8*y**12/24 + \\\n x**8*y**10/48 + x**8*y**8/40320 + x**7*y**10/6 + \\\n x**7*y**8/120 + x**6*y**8/4 + x**6*y**6/720 + x**5*y**6/6 + \\\n@@ -412,7 +414,7 @@ def test_cosh():\n \n def test_tanh():\n R, x, y = ring('x, y', QQ)\n- assert rs_tanh(x, x, 9)/x**5 == Rational(-17, 315)*x**2 + Rational(2, 15) - Rational(1, 3)*x**(-2) + x**(-4)\n+ assert 
rs_tanh(x, x, 9) == x - QQ(1,3)*x**3 + QQ(2,15)*x**5 - QQ(17,315)*x**7\n assert rs_tanh(x*y + x**2*y**3, x, 9) == 4*x**8*y**11/3 - \\\n 17*x**8*y**9/45 + 4*x**7*y**9/3 - 17*x**7*y**7/315 - x**6*y**9/3 + \\\n 2*x**6*y**7/3 - x**5*y**7 + 2*x**5*y**5/15 - x**4*y**5 - \\\n@@ -443,8 +445,9 @@ def test_RR():\n q = ((2 + a)**QQ(1, 5)).series(a, 0, 5).removeO()\n is_close(p.as_expr(), q.subs(a, 5).n())\n \n+\n def test_is_regular():\n- R, x, y = ring('x, y', QQ)\n+ R, x, y = puiseux_ring('x, y', QQ)\n p = 1 + 2*x + x**2 + 3*x**3\n assert not rs_is_puiseux(p, x)\n \n@@ -455,8 +458,9 @@ def test_is_regular():\n p = x + x**2*y**QQ(1,5)*y\n assert not rs_is_puiseux(p, x)\n \n+\n def test_puiseux():\n- R, x, y = ring('x, y', QQ)\n+ R, x, y = puiseux_ring('x, y', QQ)\n p = x**QQ(2,5) + x**QQ(2,3) + x\n \n r = rs_series_inversion(p, x, 1)\n@@ -518,20 +522,21 @@ def test_puiseux():\n assert r == -x**QQ(9,5) - x**QQ(26,15) - x**QQ(22,15) - x**QQ(6,5)/3 + \\\n x + x**QQ(2,3) + x**QQ(2,5)\n \n+\n def test_puiseux_algebraic(): # https://github.com/sympy/sympy/issues/24395\n \n K = QQ.algebraic_field(sqrt(2))\n sqrt2 = K.from_sympy(sqrt(2))\n x, y = symbols('x, y')\n- R, xr, yr = ring([x, y], K)\n+ R, xr, yr = puiseux_ring([x, y], K)\n p = (1+sqrt2)*xr**QQ(1,2) + (1-sqrt2)*yr**QQ(2,3)\n \n- assert dict(p) == {(QQ(1,2),QQ(0)):1+sqrt2, (QQ(0),QQ(2,3)):1-sqrt2}\n+ assert p.to_dict() == {(QQ(1,2),QQ(0)):1+sqrt2, (QQ(0),QQ(2,3)):1-sqrt2}\n assert p.as_expr() == (1 + sqrt(2))*x**(S(1)/2) + (1 - sqrt(2))*y**(S(2)/3)\n \n \n def test1():\n- R, x = ring('x', QQ)\n+ R, x = puiseux_ring('x', QQ)\n r = rs_sin(x, x, 15)*x**(-5)\n assert r == x**8/6227020800 - x**6/39916800 + x**4/362880 - x**2/5040 + \\\n QQ(1,120) - x**-2/6 + x**-4\n@@ -556,9 +561,10 @@ def test1():\n x**3/720 + x**QQ(5,2)/120 + x**2/24 + x**QQ(3,2)/6 + x/2 + \\\n x**QQ(1,2) + 1\n \n+\n def test_puiseux2():\n R, y = ring('y', QQ)\n- S, x = ring('x', R)\n+ S, x = puiseux_ring('x', R.to_domain())\n \n p = x + x**QQ(1,5)*y\n r = rs_atan(p, x, 3)\ndiff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py\nindex 3a48d45a6f15..4fd3c4c05eff 100644\n--- a/sympy/polys/tests/test_rings.py\n+++ b/sympy/polys/tests/test_rings.py\n@@ -58,19 +58,10 @@ def test_PolyRing___hash__():\n \n def test_PolyRing___eq__():\n assert ring(\"x,y,z\", QQ)[0] == ring(\"x,y,z\", QQ)[0]\n- assert ring(\"x,y,z\", QQ)[0] is ring(\"x,y,z\", QQ)[0]\n-\n assert ring(\"x,y,z\", QQ)[0] != ring(\"x,y,z\", ZZ)[0]\n- assert ring(\"x,y,z\", QQ)[0] is not ring(\"x,y,z\", ZZ)[0]\n-\n assert ring(\"x,y,z\", ZZ)[0] != ring(\"x,y,z\", QQ)[0]\n- assert ring(\"x,y,z\", ZZ)[0] is not ring(\"x,y,z\", QQ)[0]\n-\n assert ring(\"x,y,z\", QQ)[0] != ring(\"x,y\", QQ)[0]\n- assert ring(\"x,y,z\", QQ)[0] is not ring(\"x,y\", QQ)[0]\n-\n assert ring(\"x,y\", QQ)[0] != ring(\"x,y,z\", QQ)[0]\n- assert ring(\"x,y\", QQ)[0] is not ring(\"x,y,z\", QQ)[0]\n \n def test_PolyRing_ring_new():\n R, x, y, z = ring(\"x,y,z\", QQ)\n@@ -288,23 +279,23 @@ def test_PolyElement_from_expr():\n R, X, Y, Z = ring((x, y, z), ZZ)\n \n f = R.from_expr(1)\n- assert f == 1 and isinstance(f, R.dtype)\n+ assert f == 1 and R.is_element(f)\n \n f = R.from_expr(x)\n- assert f == X and isinstance(f, R.dtype)\n+ assert f == X and R.is_element(f)\n \n f = R.from_expr(x*y*z)\n- assert f == X*Y*Z and isinstance(f, R.dtype)\n+ assert f == X*Y*Z and R.is_element(f)\n \n f = R.from_expr(x*y*z + x*y + x)\n- assert f == X*Y*Z + X*Y + X and isinstance(f, R.dtype)\n+ assert f == X*Y*Z + X*Y + X and R.is_element(f)\n \n f = 
R.from_expr(x**3*y*z + x**2*y**7 + 1)\n- assert f == X**3*Y*Z + X**2*Y**7 + 1 and isinstance(f, R.dtype)\n+ assert f == X**3*Y*Z + X**2*Y**7 + 1 and R.is_element(f)\n \n r, F = sring([exp(2)])\n f = r.from_expr(exp(2))\n- assert f == F[0] and isinstance(f, r.dtype)\n+ assert f == F[0] and r.is_element(f)\n \n raises(ValueError, lambda: R.from_expr(1/x))\n raises(ValueError, lambda: R.from_expr(2**x))\n@@ -312,7 +303,7 @@ def test_PolyElement_from_expr():\n \n R, = ring(\"\", ZZ)\n f = R.from_expr(1)\n- assert f == 1 and isinstance(f, R.dtype)\n+ assert f == 1 and R.is_element(f)\n \n def test_PolyElement_degree():\n R, x,y,z = ring(\"x,y,z\", ZZ)\n@@ -615,9 +606,9 @@ def test_PolyElement___truediv__():\n assert (x**2 - 1).quo(x) == x\n assert (x**2 - x).quo(x) == x - 1\n \n- assert (x**2 - 1)/x == x - x**(-1)\n+ raises(ExactQuotientFailed, lambda: (x**2 - 1)/x)\n assert (x**2 - x)/x == x - 1\n- assert (x**2 - 1)/(2*x) == x/2 - x**(-1)/2\n+ raises(ExactQuotientFailed, lambda: (x**2 - 1)/(2*x))\n \n assert (x**2 - 1).quo(2*x) == 0\n assert (x**2 - x)/(x - 1) == (x**2 - x).quo(x - 1) == x\n@@ -634,7 +625,7 @@ def test_PolyElement___truediv__():\n Rxyz, x,y,z = ring(\"x,y,z\", Ruv)\n \n assert dict((u**2*x + u)/u) == {(1, 0, 0): u, (0, 0, 0): 1}\n- raises(TypeError, lambda: u/(u**2*x + u))\n+ raises(ExactQuotientFailed, lambda: u/(u**2*x + u))\n \n raises(TypeError, lambda: t/x)\n raises(TypeError, lambda: x/t)\n@@ -670,7 +661,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = 3*x**3 + x**2 + x + 5, 5*x**2 - 3*x + 1\n@@ -678,7 +670,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = 5*x**4 + 4*x**3 + 3*x**2 + 2*x + 1, x**2 + 2*x + 3\n@@ -686,7 +679,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = 5*x**5 + 4*x**4 + 3*x**3 + 2*x**2 + x, x**4 + 2*x**3 + 9\n@@ -694,7 +688,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n R, x = ring(\"x\", QQ)\n@@ -704,7 +699,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = 3*x**3 + x**2 + x + 5, 5*x**2 - 3*x + 1\n@@ -712,7 +708,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n R, x,y = ring(\"x,y\", ZZ)\n@@ -722,15 +719,16 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, 
g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n- assert f.exquo(g) == q\n+ assert f.quo(g) == q\n+ assert f.exquo(g) == f / g == q\n \n f, g = x**2 + y**2, x - y\n q, r = x + y, 2*y**2\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = x**2 + y**2, -x + y\n@@ -738,7 +736,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = x**2 + y**2, 2*x - 2*y\n@@ -746,7 +745,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n R, x,y = ring(\"x,y\", QQ)\n@@ -756,15 +756,16 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n- assert f.exquo(g) == q\n+ assert f.quo(g) == q\n+ assert f.exquo(g) == f / g == q\n \n f, g = x**2 + y**2, x - y\n q, r = x + y, 2*y**2\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = x**2 + y**2, -x + y\n@@ -772,7 +773,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n f, g = x**2 + y**2, 2*x - 2*y\n@@ -780,7 +782,8 @@ def test_PolyElement___truediv__():\n \n assert f.div(g) == divmod(f, g) == (q, r)\n assert f.rem(g) == f % g == r\n- assert f.quo(g) == f / g == q\n+ assert f.quo(g) == q\n+ raises(ExactQuotientFailed, lambda: f / g)\n raises(ExactQuotientFailed, lambda: f.exquo(g))\n \n def test_PolyElement___pow__():\n@@ -791,8 +794,6 @@ def test_PolyElement___pow__():\n assert f**1 == f\n raises(ValueError, lambda: f**(-1))\n \n- assert x**(-1) == x**(-1)\n-\n assert f**2 == f._pow_generic(2) == f._pow_multinomial(2) == 4*x**2 + 12*x + 9\n assert f**3 == f._pow_generic(3) == f._pow_multinomial(3) == 8*x**3 + 36*x**2 + 54*x + 27\n assert f**4 == f._pow_generic(4) == f._pow_multinomial(4) == 16*x**4 + 96*x**3 + 216*x**2 + 216*x + 81\n@@ -1172,13 +1173,13 @@ def test_PolyElement_evaluate():\n f = (x*y)**3 + 4*(x*y)**2 + 2*x*y + 3\n \n r = f.evaluate(x, 0)\n- assert r == 3 and isinstance(r, R.drop(x).dtype)\n+ assert r == 3 and R.drop(x).is_element(r)\n r = f.evaluate([(x, 0), (y, 0)])\n- assert r == 3 and isinstance(r, R.drop(x, y).dtype)\n+ assert r == 3 and R.drop(x, y).is_element(r)\n r = f.evaluate(y, 0)\n- assert r == 3 and isinstance(r, R.drop(y).dtype)\n+ assert r == 3 and R.drop(y).is_element(r)\n r = f.evaluate([(y, 0), (x, 0)])\n- assert r == 3 and isinstance(r, R.drop(y, x).dtype)\n+ assert r == 3 and R.drop(y, x).is_element(r)\n \n r = f.evaluate([(x, 0), (y, 0), (z, 0)])\n assert r == 3 and not isinstance(r, PolyElement)\n@@ -1192,7 +1193,7 @@ def test_PolyElement_subs():\n f = x**3 + 
4*x**2 + 2*x + 3\n \n r = f.subs(x, 0)\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n \n raises(CoercionFailed, lambda: f.subs(x, QQ(1,7)))\n \n@@ -1200,9 +1201,9 @@ def test_PolyElement_subs():\n f = x**3 + 4*x**2 + 2*x + 3\n \n r = f.subs(x, 0)\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n r = f.subs([(x, 0), (y, 0)])\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n \n raises(CoercionFailed, lambda: f.subs([(x, 1), (y, QQ(1,7))]))\n raises(CoercionFailed, lambda: f.subs([(x, QQ(1,7)), (y, 1)]))\n@@ -1252,7 +1253,7 @@ def test_PolyElement_compose():\n f = x**3 + 4*x**2 + 2*x + 3\n \n r = f.compose(x, 0)\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n \n assert f.compose(x, x) == f\n assert f.compose(x, x**2) == x**6 + 4*x**4 + 2*x**2 + 3\n@@ -1263,13 +1264,13 @@ def test_PolyElement_compose():\n f = x**3 + 4*x**2 + 2*x + 3\n \n r = f.compose(x, 0)\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n r = f.compose([(x, 0), (y, 0)])\n- assert r == 3 and isinstance(r, R.dtype)\n+ assert r == 3 and R.is_element(r)\n \n r = (x**3 + 4*x**2 + 2*x*y*z + 3).compose(x, y*z**2 - 1)\n q = (y*z**2 - 1)**3 + 4*(y*z**2 - 1)**2 + 2*(y*z**2 - 1)*y*z + 3\n- assert r == q and isinstance(r, R.dtype)\n+ assert r == q and R.is_element(r)\n \n def test_PolyElement_is_():\n R, x,y,z = ring(\"x,y,z\", QQ)\n@@ -1350,7 +1351,7 @@ def test_PolyElement_drop():\n \n assert R(1).drop(0).ring == PolyRing(\"y,z\", ZZ, lex)\n assert R(1).drop(0).drop(0).ring == PolyRing(\"z\", ZZ, lex)\n- assert isinstance(R(1).drop(0).drop(0).drop(0), R.dtype) is False\n+ assert R.is_element(R(1).drop(0).drop(0).drop(0)) is False\n \n raises(ValueError, lambda: z.drop(0).drop(0).drop(0))\n raises(ValueError, lambda: x.drop(0))\ndiff --git a/sympy/polys/tests/test_solvers.py b/sympy/polys/tests/test_solvers.py\nindex 9b7c2b3c9f74..bf8708314466 100644\n--- a/sympy/polys/tests/test_solvers.py\n+++ b/sympy/polys/tests/test_solvers.py\n@@ -13,7 +13,7 @@ def test_solve_lin_sys_2x2_one():\n 2*x1 - x2]\n sol = {x1: QQ(5, 3), x2: QQ(10, 3)}\n _sol = solve_lin_sys(eqs, domain)\n- assert _sol == sol and all(isinstance(s, domain.dtype) for s in _sol)\n+ assert _sol == sol and all(s.ring == domain for s in _sol)\n \n def test_solve_lin_sys_2x4_none():\n domain, x1,x2 = ring(\"x1,x2\", QQ)\ndiff --git a/sympy/printing/tests/test_str.py b/sympy/printing/tests/test_str.py\nindex 5140136b2465..675212964b03 100644\n--- a/sympy/printing/tests/test_str.py\n+++ b/sympy/printing/tests/test_str.py\n@@ -463,8 +463,6 @@ def test_PolyElement():\n assert str(x - 1) == \"x - 1\"\n assert str(x + 1) == \"x + 1\"\n assert str(x**2) == \"x**2\"\n- assert str(x**(-2)) == \"x**(-2)\"\n- assert str(x**QQ(1, 2)) == \"x**(1/2)\"\n \n assert str((u**2 + 3*u*v + 1)*x**2*y + u + 1) == \"(u**2 + 3*u*v + 1)*x**2*y + u + 1\"\n assert str((u**2 + 3*u*v + 1)*x**2*y + (u + 1)*x) == \"(u**2 + 3*u*v + 1)*x**2*y + (u + 1)*x\"\n" }
[ { "diff_hunk": "@@ -0,0 +1,795 @@\n+\"\"\"\n+Puiseux rings. These are used by the ring_series module to represented\n+truncated Puiseux series. Elements of a Puiseux ring are like polynomials\n+except that the exponents can be negative or rational rather than just\n+non-negative integers.\n+\"\"\"\n+\n+# Previously the ring_series module used PolyElement to represent Puiseux\n+# series. This is problematic because it means that PolyElement has to support\n+# negative and non-integer exponents which most polynomial representations do\n+# not support. This module provides an implementation of a ring for Puiseux\n+# series that can be used by ring_series without breaking the basic invariants\n+# of polynomial rings.\n+#\n+# Ideally there would be more of a proper series type that can keep track of\n+# not not just the leading terms of a truncated series but also the precision\n+# of the series. For now the rings here are just introduced to keep the\n+# interface that ring_series was using before.\n+\n+from __future__ import annotations\n+\n+from sympy.polys.domains import QQ\n+from sympy.polys.rings import PolyRing, PolyElement\n+from sympy.core.add import Add\n+from sympy.core.mul import Mul\n+from sympy.external.gmpy import gcd, lcm\n+\n+\n+from typing import TYPE_CHECKING\n+\n+\n+if TYPE_CHECKING:\n+ from typing import Any, Unpack\n+ from sympy.core.expr import Expr\n+ from sympy.polys.domains import Domain\n+ from collections.abc import Iterable, Iterator\n+\n+\n+def puiseux_ring(\n+ symbols: str | list[Expr], domain: Domain\n+) -> tuple[PuiseuxRing, Unpack[tuple[PuiseuxPoly, ...]]]:\n+ \"\"\"Construct a Puiseux ring.\n+\n+ This function constructs a Puiseux ring with the given symbols and domain.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x y', QQ)\n+ >>> R\n+ PuiseuxRing((x, y), QQ)\n+ >>> p = 5*x**QQ(1,2) + 7/y\n+ >>> p\n+ 7*y**(-1) + 5*x**(1/2)\n+ \"\"\"\n+ ring = PuiseuxRing(symbols, domain)\n+ return (ring,) + ring.gens # type: ignore\n+\n+\n+class PuiseuxRing:\n+ \"\"\"Ring of Puiseux polynomials.\n+\n+ A Puiseux polynomial is a truncated Puiseux series. The exponents of the\n+ monomials can be negative or rational numbers. 
This ring is used by the\n+ ring_series module:\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> from sympy.polys.ring_series import rs_exp, rs_nth_root\n+ >>> ring, x, y = puiseux_ring('x y', QQ)\n+ >>> f = x**2 + y**3\n+ >>> f\n+ y**3 + x**2\n+ >>> f.diff(x)\n+ 2*x\n+ >>> rs_exp(x, x, 5)\n+ 1 + x + 1/2*x**2 + 1/6*x**3 + 1/24*x**4\n+\n+ Importantly the Puiseux ring can represent truncated series with negative\n+ and fractional exponents:\n+\n+ >>> f = 1/x + 1/y**2\n+ >>> f\n+ x**(-1) + y**(-2)\n+ >>> f.diff(x)\n+ -1*x**(-2)\n+\n+ >>> rs_nth_root(8*x + x**2 + x**3, 3, x, 5)\n+ 2*x**(1/3) + 1/12*x**(4/3) + 23/288*x**(7/3) + -139/20736*x**(10/3)\n+\n+ See Also\n+ ========\n+\n+ sympy.polys.ring_series.rs_series\n+ PuiseuxPoly\n+ \"\"\"\n+ def __init__(self, symbols: str | list[Expr], domain: Domain):\n+\n+ poly_ring = PolyRing(symbols, domain)\n+\n+ domain = poly_ring.domain\n+ ngens = poly_ring.ngens\n+\n+ self.poly_ring = poly_ring\n+ self.domain = domain\n+\n+ self.symbols = poly_ring.symbols\n+ self.gens = tuple([self.from_poly(g) for g in poly_ring.gens])\n+ self.ngens = ngens\n+\n+ self.zero = self.from_poly(poly_ring.zero)\n+ self.one = self.from_poly(poly_ring.one)\n+\n+ self.zero_monom = poly_ring.zero_monom # type: ignore\n+ self.monomial_mul = poly_ring.monomial_mul # type: ignore\n+\n+ def __repr__(self) -> str:\n+ return f\"PuiseuxRing({self.symbols}, {self.domain})\"\n+\n+ def __eq__(self, other: Any) -> bool:\n+ if not isinstance(other, PuiseuxRing):\n+ return NotImplemented\n+ return self.symbols == other.symbols and self.domain == other.domain\n+\n+ def from_poly(self, poly: PolyElement) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from a polynomial.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.rings import ring\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R1, x1 = ring('x', QQ)\n+ >>> R2, x2 = puiseux_ring('x', QQ)\n+ >>> R2.from_poly(x1**2)\n+ x**2\n+ \"\"\"\n+ return PuiseuxPoly(poly, self)\n+\n+ def from_dict(self, terms: dict[tuple[int, ...], Any]) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from a dictionary of terms.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.from_dict({(QQ(1,2),): QQ(3)})\n+ 3*x**(1/2)\n+ \"\"\"\n+ return PuiseuxPoly.from_dict(terms, self)\n+\n+ def from_int(self, n: int) -> PuiseuxPoly:\n+ \"\"\"Create a Puiseux polynomial from an integer.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.from_int(3)\n+ 3\n+ \"\"\"\n+ return self.from_poly(self.poly_ring(n))\n+\n+ def domain_new(self, arg: Any) -> Any:\n+ \"\"\"Create a new element of the domain.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.domain_new(3)\n+ 3\n+ >>> QQ.of_type(_)\n+ True\n+ \"\"\"\n+ return self.poly_ring.domain_new(arg)\n+\n+ def ground_new(self, arg: Any) -> PuiseuxPoly:\n+ \"\"\"Create a new element from a ground element.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring, PuiseuxPoly\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R.ground_new(3)\n+ 3\n+ >>> isinstance(_, PuiseuxPoly)\n+ True\n+ \"\"\"\n+ return self.from_poly(self.poly_ring.ground_new(arg))\n+\n+ def __call__(self, arg: Any) -> PuiseuxPoly:\n+ \"\"\"Coerce an element into the 
ring.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x = puiseux_ring('x', QQ)\n+ >>> R(3)\n+ 3\n+ >>> R({(QQ(1,2),): QQ(3)})\n+ 3*x**(1/2)\n+ \"\"\"\n+ if isinstance(arg, dict):\n+ return self.from_dict(arg)\n+ else:\n+ return self.from_poly(self.poly_ring(arg))\n+\n+ def index(self, x: PuiseuxPoly) -> int:\n+ \"\"\"Return the index of a generator.\n+\n+ >>> from sympy.polys.domains import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x y', QQ)\n+ >>> R.index(x)\n+ 0\n+ >>> R.index(y)\n+ 1\n+ \"\"\"\n+ return self.gens.index(x)\n+\n+\n+def _div_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement:\n+ ring = poly.ring\n+ div = ring.monomial_div\n+ return ring.from_dict({div(m, monom): c for m, c in poly.terms()})\n+\n+\n+def _mul_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement:\n+ ring = poly.ring\n+ mul = ring.monomial_mul\n+ return ring.from_dict({mul(m, monom): c for m, c in poly.terms()})\n+\n+\n+def _div_monom(monom: Iterable[int], div: Iterable[int]) -> tuple[int, ...]:\n+ return tuple(mi - di for mi, di in zip(monom, div))\n+\n+\n+class PuiseuxPoly:\n+ \"\"\"Puiseux polynomial. Represents a truncated Puiseux series.\n+\n+ See the :class:`PuiseuxRing` class for more information.\n+\n+ >>> from sympy import QQ\n+ >>> from sympy.polys.puiseux import puiseux_ring\n+ >>> R, x, y = puiseux_ring('x, y', QQ)\n+ >>> p = 5*x**2 + 7*y**3\n+ >>> p\n+ 7*y**3 + 5*x**2\n+\n+ The internal representation of a Puiseux polynomial wraps a normal\n+ polynomial. To support negative powers the polynomial is considered to be\n+ divided by a monomial.\n+\n+ >>> p2 = 1/x + 1/y**2\n+ >>> p2.monom # x*y**2\n+ (1, 2)\n+ >>> p2.poly\n+ x + y**2\n+ >>> (y**2 + x) / (x*y**2) == p2\n+ True\n+\n+ To support fractional powers the polynomial is considered to be a function\n+ of ``x**(1/nx) * y**(1/ny) * ...``. The representation keeps track of a", "line": null, "original_line": 256, "original_start_line": null, "path": "sympy/polys/puiseux.py", "start_line": null, "text": "@user1:\nShould this be\r\n```suggestion\r\n of ``x**(1/nx), y**(1/ny), ...``. The representation keeps track of a\r\n```\r\n?" } ]
6cfcf8e0f8e17ca2d3b1b48b0466def48a714d35
diff --git a/doc/src/modules/polys/ringseries.rst b/doc/src/modules/polys/ringseries.rst index 43d5a9e6ce7e..0a57ede91961 100644 --- a/doc/src/modules/polys/ringseries.rst +++ b/doc/src/modules/polys/ringseries.rst @@ -35,47 +35,22 @@ Taylor series, we extend it to allow Laurent and even Puiseux series (with fractional exponents):: >>> from sympy.polys.ring_series import rs_cos, rs_tan - >>> R, x, y = ring('x, y', QQ) + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x, y', QQ) >>> rs_cos(x + x*y, x, 3)/x**3 - -1/2*x**(-1)*y**2 - x**(-1)*y - 1/2*x**(-1) + x**(-3) + x**(-3) + -1/2*x**(-1) + -1*x**(-1)*y + -1/2*x**(-1)*y**2 >>> rs_tan(x**QQ(2, 5)*y**QQ(1, 2), x, 2) - 1/3*x**(6/5)*y**(3/2) + x**(2/5)*y**(1/2) - -By default, ``PolyElement`` did not allow non-natural numbers as exponents. It -converted a fraction to an integer and raised an error on getting negative -exponents. The goal of the ``ring series`` module is fast series expansion, and -not to use the ``polys`` module. The reason we use it as our backend is simply -because it implements a sparse representation and most of the basic functions -that we need. However, this default behaviour of ``polys`` was limiting for -``ring series``. - -Note that there is no such constraint (in having rational exponents) in the -data-structure used by ``polys``- ``dict``. Sparse polynomials -(``PolyElement``) use the Python dict to store a polynomial term by term, where -a tuple of exponents is the key and the coefficient of that term is the value. -There is no reason why we can't have rational values in the ``dict`` so as to -support rational exponents. - -So the approach we took was to modify sparse ``polys`` to allow non-natural -exponents. And it turned out to be quite simple. We only had to delete the -conversion to ``int`` of exponents in the ``__pow__`` method of -``PolyElement``. So:: - - >>> x**QQ(3, 4) - x**(3/4) - -and not ``1`` as was the case earlier. - -Though this change violates the definition of a polynomial, it doesn't break -anything yet. Ideally, we shouldn't modify ``polys`` in any way. But to have -all the ``series`` capabilities we want, no other simple way was found. If need -be, we can separate the modified part of ``polys`` from core ``polys``. It -would be great if any other elegant solution is found. - -All series returned by the functions of this module are instances of the -``PolyElement`` class. To use them with other SymPy types, convert them to + x**(2/5)*y**(1/2) + 1/3*x**(6/5)*y**(3/2) + +Since polynomial rings cannot handle negative or fractional exponents, we use +the :func:`sympy.polys.puiseux.puiseux_ring` function to create a ring that can +represent such series. + +All series returned by the functions of this module are instances of +``PolyElement`` or ``PuiseuxPoly``. To use them with other SymPy types, convert +them to ``Expr``:: >>> from sympy.polys.ring_series import rs_exp @@ -213,6 +188,7 @@ by ``polys.ring.ring``. **Utility functions** +.. autofunction:: rs_series .. autofunction:: rs_is_puiseux .. autofunction:: rs_puiseux .. autofunction:: rs_puiseux2 @@ -220,3 +196,15 @@ by ``polys.ring.ring``. .. autofunction:: rs_fun .. autofunction:: mul_xin .. autofunction:: pow_xin + +**Puiseux rings** + +.. currentmodule:: sympy.polys.puiseux + +.. autofunction:: puiseux_ring + +.. autoclass:: PuiseuxRing + :members: + +.. 
autoclass:: PuiseuxPoly + :members: diff --git a/pyproject.toml b/pyproject.toml index 3430924e523c..67391201b5f9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -30,6 +30,14 @@ markers = [ "tooslow", ] +[tool.coverage.report] + +exclude_lines = [ + "pragma: no cover", + "if TYPE_CHECKING:", + "assert False", +] + [tool.ruff] # Enable Pyflakes `E` and `F` codes by default. lint.select = [ diff --git a/sympy/integrals/prde.py b/sympy/integrals/prde.py index 4488cbfc4000..28e91ea0ff3a 100644 --- a/sympy/integrals/prde.py +++ b/sympy/integrals/prde.py @@ -533,7 +533,7 @@ def param_poly_rischDE(a, b, q, n, DE): if a.is_ground: # Normalization: a = 1. a = a.LC() - b, q = b.quo_ground(a), [qi.quo_ground(a) for qi in q] + b, q = b.to_field().exquo_ground(a), [qi.to_field().exquo_ground(a) for qi in q] if not b.is_zero and (DE.case == 'base' or b.degree() > max(0, DE.d.degree() - 1)): diff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py index 51b81775bb14..41e1ef3aa363 100644 --- a/sympy/integrals/tests/test_integrals.py +++ b/sympy/integrals/tests/test_integrals.py @@ -1813,8 +1813,8 @@ def test_issue_15810(): def test_issue_21024(): x = Symbol('x', real=True, nonzero=True) f = log(x)*log(4*x) + log(3*x + exp(2)) - F = x*log(x)**2 + x*(1 - 2*log(2)) + (-2*x + 2*x*log(2))*log(x) + \ - (x + exp(2)/6)*log(3*x + exp(2)) + exp(2)*log(3*x + exp(2))/6 + F = x*log(x)**2 + x*log(3*x + exp(2)) + x*(1 - 2*log(2)) + \ + (-2*x + 2*x*log(2))*log(x) + exp(2)*log(3*x + exp(2))/3 assert F == integrate(f, x) f = (x + exp(3))/x**2 diff --git a/sympy/polys/domains/complexfield.py b/sympy/polys/domains/complexfield.py index de02e46d190b..69f0bff2c1b3 100644 --- a/sympy/polys/domains/complexfield.py +++ b/sympy/polys/domains/complexfield.py @@ -142,10 +142,7 @@ def from_RealField(self, element, base): return self.dtype(element) def from_ComplexField(self, element, base): - if self == base: - return element - else: - return self.dtype(element) + return self.dtype(element) def get_ring(self): """Returns a ring associated with ``self``. """ diff --git a/sympy/polys/domains/domain.py b/sympy/polys/domains/domain.py index 1c2b0d3171d6..1d7fc1eac618 100644 --- a/sympy/polys/domains/domain.py +++ b/sympy/polys/domains/domain.py @@ -116,7 +116,7 @@ class (``dtype``) for the elements of the domain. For example the ZZ[x] >>> type(K) # class of the domain <class 'sympy.polys.domains.polynomialring.PolynomialRing'> - >>> K.dtype # class of the elements + >>> K.dtype # doctest: +SKIP <class 'sympy.polys.rings.PolyElement'> >>> p_expr = x**2 + 1 # Expr >>> p_expr @@ -469,7 +469,7 @@ def convert(self, element, base=None): def of_type(self, element): """Check if ``a`` is of type ``dtype``. """ - return isinstance(element, self.tp) # XXX: this isn't correct, e.g. PolyElement + return isinstance(element, self.tp) def __contains__(self, a): """Check if ``a`` belongs to this domain. """ diff --git a/sympy/polys/domains/fractionfield.py b/sympy/polys/domains/fractionfield.py index 47bc25436b8e..78f5054ddd54 100644 --- a/sympy/polys/domains/fractionfield.py +++ b/sympy/polys/domains/fractionfield.py @@ -37,6 +37,10 @@ def __init__(self, domain_or_field, symbols=None, order=None): def new(self, element): return self.field.field_new(element) + def of_type(self, element): + """Check if ``a`` is of type ``dtype``. 
""" + return self.field.is_element(element) + @property def zero(self): return self.field.zero @@ -53,13 +57,13 @@ def __str__(self): return str(self.domain) + '(' + ','.join(map(str, self.symbols)) + ')' def __hash__(self): - return hash((self.__class__.__name__, self.dtype.field, self.domain, self.symbols)) + return hash((self.__class__.__name__, self.field, self.domain, self.symbols)) def __eq__(self, other): """Returns ``True`` if two domains are equivalent. """ - return isinstance(other, FractionField) and \ - (self.dtype.field, self.domain, self.symbols) ==\ - (other.dtype.field, other.domain, other.symbols) + if not isinstance(other, FractionField): + return NotImplemented + return self.field == other.field def to_sympy(self, a): """Convert ``a`` to a SymPy object. """ diff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py index bad73208f866..daccdcdede4d 100644 --- a/sympy/polys/domains/polynomialring.py +++ b/sympy/polys/domains/polynomialring.py @@ -43,6 +43,10 @@ def __init__(self, domain_or_ring, symbols=None, order=None): def new(self, element): return self.ring.ring_new(element) + def of_type(self, element): + """Check if ``a`` is of type ``dtype``. """ + return self.ring.is_element(element) + @property def zero(self): return self.ring.zero @@ -59,13 +63,13 @@ def __str__(self): return str(self.domain) + '[' + ','.join(map(str, self.symbols)) + ']' def __hash__(self): - return hash((self.__class__.__name__, self.dtype.ring, self.domain, self.symbols)) + return hash((self.__class__.__name__, self.ring, self.domain, self.symbols)) def __eq__(self, other): """Returns `True` if two domains are equivalent. """ - return isinstance(other, PolynomialRing) and \ - (self.dtype.ring, self.domain, self.symbols) == \ - (other.dtype.ring, other.domain, other.symbols) + if not isinstance(other, PolynomialRing): + return NotImplemented + return self.ring == other.ring def is_unit(self, a): """Returns ``True`` if ``a`` is a unit of ``self``""" diff --git a/sympy/polys/domains/realfield.py b/sympy/polys/domains/realfield.py index 79ada6f70737..cb7fac2218c1 100644 --- a/sympy/polys/domains/realfield.py +++ b/sympy/polys/domains/realfield.py @@ -171,10 +171,7 @@ def from_AlgebraicField(self, element, base): return self.from_sympy(base.to_sympy(element).evalf(self.dps)) def from_RealField(self, element, base): - if self == base: - return element - else: - return self.dtype(element) + return self.dtype(element) def from_ComplexField(self, element, base): if not element.imag: diff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py index 5cdcb9f0403a..405665726b49 100644 --- a/sympy/polys/domains/tests/test_domains.py +++ b/sympy/polys/domains/tests/test_domains.py @@ -19,9 +19,9 @@ from sympy.polys.domains.realfield import RealField from sympy.polys.numberfields.subfield import field_isomorphism -from sympy.polys.rings import ring +from sympy.polys.rings import ring, PolyElement from sympy.polys.specialpolys import cyclotomic_poly -from sympy.polys.fields import field +from sympy.polys.fields import field, FracElement from sympy.polys.agca.extensions import FiniteExtension @@ -657,7 +657,12 @@ def test_Domain_is_unit(): def test_Domain_convert(): def check_element(e1, e2, K1, K2, K3): - assert type(e1) is type(e2), '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3) + if isinstance(e1, PolyElement): + assert isinstance(e2, PolyElement) and e1.ring == e2.ring + elif isinstance(e1, FracElement): + assert isinstance(e2, 
FracElement) and e1.field == e2.field + else: + assert type(e1) is type(e2), '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3) assert e1 == e2, '%s, %s: %s %s -> %s' % (e1, e2, K1, K2, K3) def check_domains(K1, K2): diff --git a/sympy/polys/fields.py b/sympy/polys/fields.py index e45063b5f7ad..ee844df55690 100644 --- a/sympy/polys/fields.py +++ b/sympy/polys/fields.py @@ -1,7 +1,6 @@ """Sparse rational function fields. """ from __future__ import annotations -from typing import Any from functools import reduce from operator import add, mul, lt, le, gt, ge @@ -100,7 +99,6 @@ def sfield(exprs, *symbols, **options): else: return (_field, fracs) -_field_cache: dict[Any, Any] = {} class FracField(DefaultPrinting): """Multivariate distributed rational function field. """ @@ -120,32 +118,29 @@ def __new__(cls, symbols, domain, order=lex): order = ring.order _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _field_cache.get(_hash_tuple) - if obj is None: - obj = object.__new__(cls) - obj._hash_tuple = _hash_tuple - obj._hash = hash(_hash_tuple) - obj.ring = ring - obj.dtype = type("FracElement", (FracElement,), {"field": obj}) - obj.symbols = symbols - obj.ngens = ngens - obj.domain = domain - obj.order = order + obj = object.__new__(cls) + obj._hash_tuple = _hash_tuple + obj._hash = hash(_hash_tuple) + obj.ring = ring + obj.symbols = symbols + obj.ngens = ngens + obj.domain = domain + obj.order = order - obj.zero = obj.dtype(ring.zero) - obj.one = obj.dtype(ring.one) + obj.dtype = FracElement(obj, ring.zero).raw_new - obj.gens = obj._gens() + obj.zero = obj.dtype(ring.zero) + obj.one = obj.dtype(ring.one) - for symbol, generator in zip(obj.symbols, obj.gens): - if isinstance(symbol, Symbol): - name = symbol.name + obj.gens = obj._gens() - if not hasattr(obj, name): - setattr(obj, name, generator) + for symbol, generator in zip(obj.symbols, obj.gens): + if isinstance(symbol, Symbol): + name = symbol.name - _field_cache[_hash_tuple] = obj + if not hasattr(obj, name): + setattr(obj, name, generator) return obj @@ -160,7 +155,7 @@ def __hash__(self): return self._hash def index(self, gen): - if isinstance(gen, self.dtype): + if self.is_element(gen): return self.ring.index(gen.to_poly()) else: raise ValueError("expected a %s, got %s instead" % (self.dtype,gen)) @@ -173,8 +168,13 @@ def __eq__(self, other): def __ne__(self, other): return not self == other + def is_element(self, element): + """True if ``element`` is an element of this field. False otherwise. """ + return isinstance(element, FracElement) and element.field == self + def raw_new(self, numer, denom=None): return self.dtype(numer, denom) + def new(self, numer, denom=None): if denom is None: denom = self.ring.one numer, denom = numer.cancel(denom) @@ -292,17 +292,19 @@ def to_ring(self): class FracElement(DomainElement, DefaultPrinting, CantSympify): """Element of multivariate distributed rational function field. 
""" - def __init__(self, numer, denom=None): + def __init__(self, field, numer, denom=None): if denom is None: - denom = self.field.ring.one + denom = field.ring.one elif not denom: raise ZeroDivisionError("zero denominator") + self.field = field self.numer = numer self.denom = denom - def raw_new(f, numer, denom): - return f.__class__(numer, denom) + def raw_new(f, numer, denom=None): + return f.__class__(f.field, numer, denom) + def new(f, numer, denom): return f.raw_new(*numer.cancel(denom)) @@ -356,7 +358,7 @@ def sort_key(self): return (self.denom.sort_key(), self.numer.sort_key()) def _cmp(f1, f2, op): - if isinstance(f2, f1.field.dtype): + if f1.field.is_element(f2): return op(f1.sort_key(), f2.sort_key()) else: return NotImplemented @@ -406,12 +408,12 @@ def __add__(f, g): return f elif not f: return g - elif isinstance(g, field.dtype): + elif field.is_element(g): if f.denom == g.denom: return f.new(f.numer + g.numer, f.denom) else: return f.new(f.numer*g.denom + f.denom*g.numer, f.denom*g.denom) - elif isinstance(g, field.ring.dtype): + elif field.ring.is_element(g): return f.new(f.numer + f.denom*g, f.denom) else: if isinstance(g, FracElement): @@ -430,7 +432,7 @@ def __add__(f, g): return f.__radd__(g) def __radd__(f, c): - if isinstance(c, f.field.ring.dtype): + if f.field.ring.is_element(c): return f.new(f.numer + f.denom*c, f.denom) op, g_numer, g_denom = f._extract_ground(c) @@ -450,12 +452,12 @@ def __sub__(f, g): return f elif not f: return -g - elif isinstance(g, field.dtype): + elif field.is_element(g): if f.denom == g.denom: return f.new(f.numer - g.numer, f.denom) else: return f.new(f.numer*g.denom - f.denom*g.numer, f.denom*g.denom) - elif isinstance(g, field.ring.dtype): + elif field.ring.is_element(g): return f.new(f.numer - f.denom*g, f.denom) else: if isinstance(g, FracElement): @@ -481,7 +483,7 @@ def __sub__(f, g): return f.new(f.numer*g_denom - f.denom*g_numer, f.denom*g_denom) def __rsub__(f, c): - if isinstance(c, f.field.ring.dtype): + if f.field.ring.is_element(c): return f.new(-f.numer + f.denom*c, f.denom) op, g_numer, g_denom = f._extract_ground(c) @@ -499,9 +501,9 @@ def __mul__(f, g): if not f or not g: return field.zero - elif isinstance(g, field.dtype): + elif field.is_element(g): return f.new(f.numer*g.numer, f.denom*g.denom) - elif isinstance(g, field.ring.dtype): + elif field.ring.is_element(g): return f.new(f.numer*g, f.denom) else: if isinstance(g, FracElement): @@ -520,7 +522,7 @@ def __mul__(f, g): return f.__rmul__(g) def __rmul__(f, c): - if isinstance(c, f.field.ring.dtype): + if f.field.ring.is_element(c): return f.new(f.numer*c, f.denom) op, g_numer, g_denom = f._extract_ground(c) @@ -538,9 +540,9 @@ def __truediv__(f, g): if not g: raise ZeroDivisionError - elif isinstance(g, field.dtype): + elif field.is_element(g): return f.new(f.numer*g.denom, f.denom*g.numer) - elif isinstance(g, field.ring.dtype): + elif field.ring.is_element(g): return f.new(f.numer, f.denom*g) else: if isinstance(g, FracElement): @@ -568,7 +570,7 @@ def __truediv__(f, g): def __rtruediv__(f, c): if not f: raise ZeroDivisionError - elif isinstance(c, f.field.ring.dtype): + elif f.field.ring.is_element(c): return f.new(f.denom*c, f.numer) op, g_numer, g_denom = f._extract_ground(c) diff --git a/sympy/polys/modulargcd.py b/sympy/polys/modulargcd.py index 20dfd33d9197..00d1920f69fe 100644 --- a/sympy/polys/modulargcd.py +++ b/sympy/polys/modulargcd.py @@ -609,7 +609,8 @@ def _chinese_remainder_reconstruction_multivariate(hp, hq, p, q): 
hpmonoms.difference_update(monoms) hqmonoms.difference_update(monoms) - zero = hp.ring.domain.zero + domain = hp.ring.domain + zero = domain.zero hpq = hp.ring.zero @@ -617,7 +618,7 @@ def _chinese_remainder_reconstruction_multivariate(hp, hq, p, q): crt_ = _chinese_remainder_reconstruction_multivariate else: def crt_(cp, cq, p, q): - return crt([p, q], [cp, cq], symmetric=True)[0] + return domain(crt([p, q], [cp, cq], symmetric=True)[0]) for monom in monoms: hpq[monom] = crt_(hp[monom], hq[monom], p, q) diff --git a/sympy/polys/monomials.py b/sympy/polys/monomials.py index f464ba97f137..e5897a09986d 100644 --- a/sympy/polys/monomials.py +++ b/sympy/polys/monomials.py @@ -4,6 +4,7 @@ from itertools import combinations_with_replacement, product from textwrap import dedent +from sympy.core.cache import cacheit from sympy.core import Mul, S, Tuple, sympify from sympy.polys.polyerrors import ExactQuotientFailed from sympy.polys.polyutils import PicklableWithSlots, dict_from_expr @@ -394,8 +395,14 @@ def term_div(a, b, domain): class MonomialOps: """Code generator of fast monomial arithmetic functions. """ - def __init__(self, ngens): - self.ngens = ngens + @cacheit + def __new__(cls, ngens): + obj = super().__new__(cls) + obj.ngens = ngens + return obj + + def __getnewargs__(self): + return (self.ngens,) def _build(self, code, name): ns = {} @@ -405,6 +412,7 @@ def _build(self, code, name): def _vars(self, name): return [ "%s%s" % (name, i) for i in range(self.ngens) ] + @cacheit def mul(self): name = "monomial_mul" template = dedent("""\ @@ -419,6 +427,7 @@ def %(name)s(A, B): code = template % {"name": name, "A": ", ".join(A), "B": ", ".join(B), "AB": ", ".join(AB)} return self._build(code, name) + @cacheit def pow(self): name = "monomial_pow" template = dedent("""\ @@ -431,6 +440,7 @@ def %(name)s(A, k): code = template % {"name": name, "A": ", ".join(A), "Ak": ", ".join(Ak)} return self._build(code, name) + @cacheit def mulpow(self): name = "monomial_mulpow" template = dedent("""\ @@ -445,6 +455,7 @@ def %(name)s(A, B, k): code = template % {"name": name, "A": ", ".join(A), "B": ", ".join(B), "ABk": ", ".join(ABk)} return self._build(code, name) + @cacheit def ldiv(self): name = "monomial_ldiv" template = dedent("""\ @@ -459,6 +470,7 @@ def %(name)s(A, B): code = template % {"name": name, "A": ", ".join(A), "B": ", ".join(B), "AB": ", ".join(AB)} return self._build(code, name) + @cacheit def div(self): name = "monomial_div" template = dedent("""\ @@ -475,6 +487,7 @@ def %(name)s(A, B): code = template % {"name": name, "A": ", ".join(A), "B": ", ".join(B), "RAB": "\n ".join(RAB), "R": ", ".join(R)} return self._build(code, name) + @cacheit def lcm(self): name = "monomial_lcm" template = dedent("""\ @@ -489,6 +502,7 @@ def %(name)s(A, B): code = template % {"name": name, "A": ", ".join(A), "B": ", ".join(B), "AB": ", ".join(AB)} return self._build(code, name) + @cacheit def gcd(self): name = "monomial_gcd" template = dedent("""\ diff --git a/sympy/polys/puiseux.py b/sympy/polys/puiseux.py new file mode 100644 index 000000000000..e25a0fabbf93 --- /dev/null +++ b/sympy/polys/puiseux.py @@ -0,0 +1,795 @@ +""" +Puiseux rings. These are used by the ring_series module to represented +truncated Puiseux series. Elements of a Puiseux ring are like polynomials +except that the exponents can be negative or rational rather than just +non-negative integers. +""" + +# Previously the ring_series module used PolyElement to represent Puiseux +# series. 
This is problematic because it means that PolyElement has to support +# negative and non-integer exponents which most polynomial representations do +# not support. This module provides an implementation of a ring for Puiseux +# series that can be used by ring_series without breaking the basic invariants +# of polynomial rings. +# +# Ideally there would be more of a proper series type that can keep track of +# not not just the leading terms of a truncated series but also the precision +# of the series. For now the rings here are just introduced to keep the +# interface that ring_series was using before. + +from __future__ import annotations + +from sympy.polys.domains import QQ +from sympy.polys.rings import PolyRing, PolyElement +from sympy.core.add import Add +from sympy.core.mul import Mul +from sympy.external.gmpy import gcd, lcm + + +from typing import TYPE_CHECKING + + +if TYPE_CHECKING: + from typing import Any, Unpack + from sympy.core.expr import Expr + from sympy.polys.domains import Domain + from collections.abc import Iterable, Iterator + + +def puiseux_ring( + symbols: str | list[Expr], domain: Domain +) -> tuple[PuiseuxRing, Unpack[tuple[PuiseuxPoly, ...]]]: + """Construct a Puiseux ring. + + This function constructs a Puiseux ring with the given symbols and domain. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x y', QQ) + >>> R + PuiseuxRing((x, y), QQ) + >>> p = 5*x**QQ(1,2) + 7/y + >>> p + 7*y**(-1) + 5*x**(1/2) + """ + ring = PuiseuxRing(symbols, domain) + return (ring,) + ring.gens # type: ignore + + +class PuiseuxRing: + """Ring of Puiseux polynomials. + + A Puiseux polynomial is a truncated Puiseux series. The exponents of the + monomials can be negative or rational numbers. This ring is used by the + ring_series module: + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> from sympy.polys.ring_series import rs_exp, rs_nth_root + >>> ring, x, y = puiseux_ring('x y', QQ) + >>> f = x**2 + y**3 + >>> f + y**3 + x**2 + >>> f.diff(x) + 2*x + >>> rs_exp(x, x, 5) + 1 + x + 1/2*x**2 + 1/6*x**3 + 1/24*x**4 + + Importantly the Puiseux ring can represent truncated series with negative + and fractional exponents: + + >>> f = 1/x + 1/y**2 + >>> f + x**(-1) + y**(-2) + >>> f.diff(x) + -1*x**(-2) + + >>> rs_nth_root(8*x + x**2 + x**3, 3, x, 5) + 2*x**(1/3) + 1/12*x**(4/3) + 23/288*x**(7/3) + -139/20736*x**(10/3) + + See Also + ======== + + sympy.polys.ring_series.rs_series + PuiseuxPoly + """ + def __init__(self, symbols: str | list[Expr], domain: Domain): + + poly_ring = PolyRing(symbols, domain) + + domain = poly_ring.domain + ngens = poly_ring.ngens + + self.poly_ring = poly_ring + self.domain = domain + + self.symbols = poly_ring.symbols + self.gens = tuple([self.from_poly(g) for g in poly_ring.gens]) + self.ngens = ngens + + self.zero = self.from_poly(poly_ring.zero) + self.one = self.from_poly(poly_ring.one) + + self.zero_monom = poly_ring.zero_monom # type: ignore + self.monomial_mul = poly_ring.monomial_mul # type: ignore + + def __repr__(self) -> str: + return f"PuiseuxRing({self.symbols}, {self.domain})" + + def __eq__(self, other: Any) -> bool: + if not isinstance(other, PuiseuxRing): + return NotImplemented + return self.symbols == other.symbols and self.domain == other.domain + + def from_poly(self, poly: PolyElement) -> PuiseuxPoly: + """Create a Puiseux polynomial from a polynomial. 
+ + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring + >>> R1, x1 = ring('x', QQ) + >>> R2, x2 = puiseux_ring('x', QQ) + >>> R2.from_poly(x1**2) + x**2 + """ + return PuiseuxPoly(poly, self) + + def from_dict(self, terms: dict[tuple[int, ...], Any]) -> PuiseuxPoly: + """Create a Puiseux polynomial from a dictionary of terms. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x = puiseux_ring('x', QQ) + >>> R.from_dict({(QQ(1,2),): QQ(3)}) + 3*x**(1/2) + """ + return PuiseuxPoly.from_dict(terms, self) + + def from_int(self, n: int) -> PuiseuxPoly: + """Create a Puiseux polynomial from an integer. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x = puiseux_ring('x', QQ) + >>> R.from_int(3) + 3 + """ + return self.from_poly(self.poly_ring(n)) + + def domain_new(self, arg: Any) -> Any: + """Create a new element of the domain. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x = puiseux_ring('x', QQ) + >>> R.domain_new(3) + 3 + >>> QQ.of_type(_) + True + """ + return self.poly_ring.domain_new(arg) + + def ground_new(self, arg: Any) -> PuiseuxPoly: + """Create a new element from a ground element. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring, PuiseuxPoly + >>> R, x = puiseux_ring('x', QQ) + >>> R.ground_new(3) + 3 + >>> isinstance(_, PuiseuxPoly) + True + """ + return self.from_poly(self.poly_ring.ground_new(arg)) + + def __call__(self, arg: Any) -> PuiseuxPoly: + """Coerce an element into the ring. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x = puiseux_ring('x', QQ) + >>> R(3) + 3 + >>> R({(QQ(1,2),): QQ(3)}) + 3*x**(1/2) + """ + if isinstance(arg, dict): + return self.from_dict(arg) + else: + return self.from_poly(self.poly_ring(arg)) + + def index(self, x: PuiseuxPoly) -> int: + """Return the index of a generator. + + >>> from sympy.polys.domains import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x y', QQ) + >>> R.index(x) + 0 + >>> R.index(y) + 1 + """ + return self.gens.index(x) + + +def _div_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement: + ring = poly.ring + div = ring.monomial_div + return ring.from_dict({div(m, monom): c for m, c in poly.terms()}) + + +def _mul_poly_monom(poly: PolyElement, monom: Iterable[int]) -> PolyElement: + ring = poly.ring + mul = ring.monomial_mul + return ring.from_dict({mul(m, monom): c for m, c in poly.terms()}) + + +def _div_monom(monom: Iterable[int], div: Iterable[int]) -> tuple[int, ...]: + return tuple(mi - di for mi, di in zip(monom, div)) + + +class PuiseuxPoly: + """Puiseux polynomial. Represents a truncated Puiseux series. + + See the :class:`PuiseuxRing` class for more information. + + >>> from sympy import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x, y', QQ) + >>> p = 5*x**2 + 7*y**3 + >>> p + 7*y**3 + 5*x**2 + + The internal representation of a Puiseux polynomial wraps a normal + polynomial. To support negative powers the polynomial is considered to be + divided by a monomial. 
+ + >>> p2 = 1/x + 1/y**2 + >>> p2.monom # x*y**2 + (1, 2) + >>> p2.poly + x + y**2 + >>> (y**2 + x) / (x*y**2) == p2 + True + + To support fractional powers the polynomial is considered to be a function + of ``x**(1/nx), y**(1/ny), ...``. The representation keeps track of a + monomial and a list of exponent denominators so that the polynomial can be + used to represent both negative and fractional powers. + + >>> p3 = x**QQ(1,2) + y**QQ(2,3) + >>> p3.ns + (2, 3) + >>> p3.poly + x + y**2 + + See Also + ======== + + sympy.polys.puiseux.PuiseuxRing + sympy.polys.rings.PolyElement + """ + + ring: PuiseuxRing + poly: PolyElement + monom: tuple[int, ...] | None + ns: tuple[int, ...] | None + + def __new__(cls, poly: PolyElement, ring: PuiseuxRing) -> PuiseuxPoly: + return cls._new(ring, poly, None, None) + + @classmethod + def _new( + cls, + ring: PuiseuxRing, + poly: PolyElement, + monom: tuple[int, ...] | None, + ns: tuple[int, ...] | None, + ) -> PuiseuxPoly: + poly, monom, ns = cls._normalize(poly, monom, ns) + return cls._new_raw(ring, poly, monom, ns) + + @classmethod + def _new_raw( + cls, + ring: PuiseuxRing, + poly: PolyElement, + monom: tuple[int, ...] | None, + ns: tuple[int, ...] | None, + ) -> PuiseuxPoly: + obj = object.__new__(cls) + obj.ring = ring + obj.poly = poly + obj.monom = monom + obj.ns = ns + return obj + + def __eq__(self, other: Any) -> bool: + if isinstance(other, PuiseuxPoly): + return ( + self.poly == other.poly + and self.monom == other.monom + and self.ns == other.ns + ) + elif self.monom is None and self.ns is None: + return self.poly.__eq__(other) + else: + return NotImplemented + + @classmethod + def _normalize( + cls, + poly: PolyElement, + monom: tuple[int, ...] | None, + ns: tuple[int, ...] | None, + ) -> tuple[PolyElement, tuple[int, ...] | None, tuple[int, ...] | None]: + if monom is None and ns is None: + return poly, None, None + + if monom is not None: + degs = [max(d, 0) for d in poly.tail_degrees()] + if all(di >= mi for di, mi in zip(degs, monom)): + poly = _div_poly_monom(poly, monom) + monom = None + elif any(degs): + poly = _div_poly_monom(poly, degs) + monom = _div_monom(monom, degs) + + if ns is not None: + factors_d, [poly_d] = poly.deflate() + degrees = poly.degrees() + monom_d = monom if monom is not None else [0] * len(degrees) + ns_new = [] + monom_new = [] + inflations = [] + for fi, ni, di, mi in zip(factors_d, ns, degrees, monom_d): + if di == 0: + g = gcd(ni, mi) + else: + g = gcd(fi, ni, mi) + ns_new.append(ni // g) + monom_new.append(mi // g) + inflations.append(fi // g) + + if any(infl > 1 for infl in inflations): + poly_d = poly_d.inflate(inflations) + + poly = poly_d + + if monom is not None: + monom = tuple(monom_new) + + if all(n == 1 for n in ns_new): + ns = None + else: + ns = tuple(ns_new) + + return poly, monom, ns + + @classmethod + def _monom_fromint( + cls, + monom: tuple[int, ...], + dmonom: tuple[int, ...] | None, + ns: tuple[int, ...] | None, + ) -> tuple[Any, ...]: + if dmonom is not None and ns is not None: + return tuple(QQ(mi - di, ni) for mi, di, ni in zip(monom, dmonom, ns)) + elif dmonom is not None: + return tuple(QQ(mi - di) for mi, di in zip(monom, dmonom)) + elif ns is not None: + return tuple(QQ(mi, ni) for mi, ni in zip(monom, ns)) + else: + return tuple(QQ(mi) for mi in monom) + + @classmethod + def _monom_toint( + cls, + monom: tuple[Any, ...], + dmonom: tuple[int, ...] | None, + ns: tuple[int, ...] 
| None, + ) -> tuple[int, ...]: + if dmonom is not None and ns is not None: + return tuple( + int((mi * ni).numerator + di) for mi, di, ni in zip(monom, dmonom, ns) + ) + elif dmonom is not None: + return tuple(int(mi.numerator + di) for mi, di in zip(monom, dmonom)) + elif ns is not None: + return tuple(int((mi * ni).numerator) for mi, ni in zip(monom, ns)) + else: + return tuple(int(mi.numerator) for mi in monom) + + def itermonoms(self) -> Iterator[tuple[Any, ...]]: + """Iterate over the monomials of a Puiseux polynomial. + + >>> from sympy import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x, y', QQ) + >>> p = 5*x**2 + 7*y**3 + >>> list(p.itermonoms()) + [(2, 0), (0, 3)] + >>> p[(2, 0)] + 5 + """ + monom, ns = self.monom, self.ns + for m in self.poly.itermonoms(): + yield self._monom_fromint(m, monom, ns) + + def monoms(self) -> list[tuple[Any, ...]]: + """Return a list of the monomials of a Puiseux polynomial.""" + return list(self.itermonoms()) + + def __iter__(self) -> Iterator[tuple[tuple[Any, ...], Any]]: + return self.itermonoms() + + def __getitem__(self, monom: tuple[int, ...]) -> Any: + monom = self._monom_toint(monom, self.monom, self.ns) + return self.poly[monom] + + def __len__(self) -> int: + return len(self.poly) + + def iterterms(self) -> Iterator[tuple[tuple[Any, ...], Any]]: + """Iterate over the terms of a Puiseux polynomial. + + >>> from sympy import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x, y', QQ) + >>> p = 5*x**2 + 7*y**3 + >>> list(p.iterterms()) + [((2, 0), 5), ((0, 3), 7)] + """ + monom, ns = self.monom, self.ns + for m, coeff in self.poly.iterterms(): + mq = self._monom_fromint(m, monom, ns) + yield mq, coeff + + def terms(self) -> list[tuple[tuple[Any, ...], Any]]: + """Return a list of the terms of a Puiseux polynomial.""" + return list(self.iterterms()) + + @property + def is_term(self) -> bool: + """Return True if the Puiseux polynomial is a single term.""" + return self.poly.is_term + + def to_dict(self) -> dict[tuple[int, ...], Any]: + """Return a dictionary representation of a Puiseux polynomial.""" + return dict(self.iterterms()) + + @classmethod + def from_dict( + cls, terms: dict[tuple[Any, ...], Any], ring: PuiseuxRing + ) -> PuiseuxPoly: + """Create a Puiseux polynomial from a dictionary of terms. + + >>> from sympy import QQ + >>> from sympy.polys.puiseux import puiseux_ring, PuiseuxPoly + >>> R, x = puiseux_ring('x', QQ) + >>> PuiseuxPoly.from_dict({(QQ(1,2),): QQ(3)}, R) + 3*x**(1/2) + >>> R.from_dict({(QQ(1,2),): QQ(3)}) + 3*x**(1/2) + """ + ns = [1] * ring.ngens + mon = [0] * ring.ngens + for mo in terms: + ns = [lcm(n, m.denominator) for n, m in zip(ns, mo)] + mon = [min(m, n) for m, n in zip(mo, mon)] + + if not any(mon): + monom = None + else: + monom = tuple(-int((m * n).numerator) for m, n in zip(mon, ns)) + + if all(n == 1 for n in ns): + ns_final = None + else: + ns_final = tuple(ns) + + terms_p = {cls._monom_toint(m, monom, ns_final): coeff for m, coeff in terms.items()} + + poly = ring.poly_ring.from_dict(terms_p) + + return cls._new(ring, poly, monom, ns_final) + + def as_expr(self) -> Expr: + """Convert a Puiseux polynomial to :class:`~sympy.core.expr.Expr`. 
+ + >>> from sympy import QQ, Expr + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x = puiseux_ring('x', QQ) + >>> p = 5*x**2 + 7*x**3 + >>> p.as_expr() + 7*x**3 + 5*x**2 + >>> isinstance(_, Expr) + True + """ + ring = self.ring + dom = ring.domain + symbols = ring.symbols + terms = [] + for monom, coeff in self.iterterms(): + coeff_expr = dom.to_sympy(coeff) + monoms_expr = [] + for i, m in enumerate(monom): + monoms_expr.append(symbols[i] ** m) + terms.append(Mul(coeff_expr, *monoms_expr)) + return Add(*terms) + + def __repr__(self) -> str: + + def format_power(base: str, exp: int) -> str: + if exp == 1: + return base + elif exp >= 0 and int(exp) == exp: + return f"{base}**{exp}" + else: + return f"{base}**({exp})" + + ring = self.ring + dom = ring.domain + + syms = [str(s) for s in ring.symbols] + terms_str = [] + for monom, coeff in sorted(self.terms()): + monom_str = "*".join(format_power(s, e) for s, e in zip(syms, monom) if e) + if coeff == dom.one: + if monom_str: + terms_str.append(monom_str) + else: + terms_str.append("1") + elif not monom_str: + terms_str.append(str(coeff)) + else: + terms_str.append(f"{coeff}*{monom_str}") + + return " + ".join(terms_str) + + def _unify( + self, other: PuiseuxPoly + ) -> tuple[ + PolyElement, PolyElement, tuple[int, ...] | None, tuple[int, ...] | None + ]: + """Bring two Puiseux polynomials to a common monom and ns.""" + poly1, monom1, ns1 = self.poly, self.monom, self.ns + poly2, monom2, ns2 = other.poly, other.monom, other.ns + + if monom1 == monom2 and ns1 == ns2: + return poly1, poly2, monom1, ns1 + + if ns1 == ns2: + ns = ns1 + elif ns1 is not None and ns2 is not None: + ns = tuple(lcm(n1, n2) for n1, n2 in zip(ns1, ns2)) + f1 = [n // n1 for n, n1 in zip(ns, ns1)] + f2 = [n // n2 for n, n2 in zip(ns, ns2)] + poly1 = poly1.inflate(f1) + poly2 = poly2.inflate(f2) + if monom1 is not None: + monom1 = tuple(m * f for m, f in zip(monom1, f1)) + if monom2 is not None: + monom2 = tuple(m * f for m, f in zip(monom2, f2)) + elif ns2 is not None: + ns = ns2 + poly1 = poly1.inflate(ns) + if monom1 is not None: + monom1 = tuple(m * n for m, n in zip(monom1, ns)) + elif ns1 is not None: + ns = ns1 + poly2 = poly2.inflate(ns) + if monom2 is not None: + monom2 = tuple(m * n for m, n in zip(monom2, ns)) + else: + assert False + + if monom1 == monom2: + monom = monom1 + elif monom1 is not None and monom2 is not None: + monom = tuple(max(m1, m2) for m1, m2 in zip(monom1, monom2)) + poly1 = _mul_poly_monom(poly1, _div_monom(monom, monom1)) + poly2 = _mul_poly_monom(poly2, _div_monom(monom, monom2)) + elif monom2 is not None: + monom = monom2 + poly1 = _mul_poly_monom(poly1, monom2) + elif monom1 is not None: + monom = monom1 + poly2 = _mul_poly_monom(poly2, monom1) + else: + assert False + + return poly1, poly2, monom, ns + + def __pos__(self) -> PuiseuxPoly: + return self + + def __neg__(self) -> PuiseuxPoly: + return self._new_raw(self.ring, -self.poly, self.monom, self.ns) + + def __add__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, PuiseuxPoly): + if self.ring != other.ring: + raise ValueError("Cannot add Puiseux polynomials from different rings") + return self._add(other) + domain = self.ring.domain + if isinstance(other, int): + return self._add_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._add_ground(other) + else: + return NotImplemented + + def __radd__(self, other: Any) -> PuiseuxPoly: + domain = self.ring.domain + if isinstance(other, int): + return 
self._add_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._add_ground(other) + else: + return NotImplemented + + def __sub__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, PuiseuxPoly): + if self.ring != other.ring: + raise ValueError( + "Cannot subtract Puiseux polynomials from different rings" + ) + return self._sub(other) + domain = self.ring.domain + if isinstance(other, int): + return self._sub_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._sub_ground(other) + else: + return NotImplemented + + def __rsub__(self, other: Any) -> PuiseuxPoly: + domain = self.ring.domain + if isinstance(other, int): + return self._rsub_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._rsub_ground(other) + else: + return NotImplemented + + def __mul__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, PuiseuxPoly): + if self.ring != other.ring: + raise ValueError( + "Cannot multiply Puiseux polynomials from different rings" + ) + return self._mul(other) + domain = self.ring.domain + if isinstance(other, int): + return self._mul_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._mul_ground(other) + else: + return NotImplemented + + def __rmul__(self, other: Any) -> PuiseuxPoly: + domain = self.ring.domain + if isinstance(other, int): + return self._mul_ground(domain.convert_from(QQ(other), QQ)) + elif domain.of_type(other): + return self._mul_ground(other) + else: + return NotImplemented + + def __pow__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, int): + if other >= 0: + return self._pow_pint(other) + else: + return self._pow_nint(-other) + elif QQ.of_type(other): + return self._pow_rational(other) + else: + return NotImplemented + + def __truediv__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, PuiseuxPoly): + if self.ring != other.ring: + raise ValueError( + "Cannot divide Puiseux polynomials from different rings" + ) + return self._mul(other._inv()) + domain = self.ring.domain + if isinstance(other, int): + return self._mul_ground(domain.convert_from(QQ(1, other), QQ)) + elif domain.of_type(other): + return self._div_ground(other) + else: + return NotImplemented + + def __rtruediv__(self, other: Any) -> PuiseuxPoly: + if isinstance(other, int): + return self._inv()._mul_ground(self.ring.domain.convert_from(QQ(other), QQ)) + elif self.ring.domain.of_type(other): + return self._inv()._mul_ground(other) + else: + return NotImplemented + + def _add(self, other: PuiseuxPoly) -> PuiseuxPoly: + poly1, poly2, monom, ns = self._unify(other) + return self._new(self.ring, poly1 + poly2, monom, ns) + + def _add_ground(self, ground: Any) -> PuiseuxPoly: + return self._add(self.ring.ground_new(ground)) + + def _sub(self, other: PuiseuxPoly) -> PuiseuxPoly: + poly1, poly2, monom, ns = self._unify(other) + return self._new(self.ring, poly1 - poly2, monom, ns) + + def _sub_ground(self, ground: Any) -> PuiseuxPoly: + return self._sub(self.ring.ground_new(ground)) + + def _rsub_ground(self, ground: Any) -> PuiseuxPoly: + return self.ring.ground_new(ground)._sub(self) + + def _mul(self, other: PuiseuxPoly) -> PuiseuxPoly: + poly1, poly2, monom, ns = self._unify(other) + if monom is not None: + monom = tuple(2 * e for e in monom) + return self._new(self.ring, poly1 * poly2, monom, ns) + + def _mul_ground(self, ground: Any) -> PuiseuxPoly: + return self._new_raw(self.ring, self.poly * ground, self.monom, self.ns) + + def 
_div_ground(self, ground: Any) -> PuiseuxPoly: + return self._new_raw(self.ring, self.poly / ground, self.monom, self.ns) + + def _pow_pint(self, n: int) -> PuiseuxPoly: + assert n >= 0 + monom = self.monom + if monom is not None: + monom = tuple(m * n for m in monom) + return self._new(self.ring, self.poly**n, monom, self.ns) + + def _pow_nint(self, n: int) -> PuiseuxPoly: + return self._inv()._pow_pint(n) + + def _pow_rational(self, n: Any) -> PuiseuxPoly: + if not self.is_term: + raise ValueError("Only monomials can be raised to a rational power") + [(monom, coeff)] = self.terms() + domain = self.ring.domain + if not domain.is_one(coeff): + raise ValueError("Only monomials can be raised to a rational power") + monom = tuple(m * n for m in monom) + return self.ring.from_dict({monom: domain.one}) + + def _inv(self) -> PuiseuxPoly: + if not self.is_term: + raise ValueError("Only terms can be inverted") + [(monom, coeff)] = self.terms() + domain = self.ring.domain + if not domain.is_Field and not domain.is_one(coeff): + raise ValueError("Cannot invert non-unit coefficient") + monom = tuple(-m for m in monom) + coeff = 1 / coeff + return self.ring.from_dict({monom: coeff}) + + def diff(self, x: PuiseuxPoly) -> PuiseuxPoly: + """Differentiate a Puiseux polynomial with respect to a variable. + + >>> from sympy import QQ + >>> from sympy.polys.puiseux import puiseux_ring + >>> R, x, y = puiseux_ring('x, y', QQ) + >>> p = 5*x**2 + 7*y**3 + >>> p.diff(x) + 10*x + >>> p.diff(y) + 21*y**2 + """ + ring = self.ring + i = ring.index(x) + g = {} + for expv, coeff in self.iterterms(): + n = expv[i] + if n: + e = list(expv) + e[i] -= 1 + g[tuple(e)] = coeff * n + return ring(g) diff --git a/sympy/polys/ring_series.py b/sympy/polys/ring_series.py index d08b0c0507d1..4afcba37627c 100644 --- a/sympy/polys/ring_series.py +++ b/sympy/polys/ring_series.py @@ -43,6 +43,7 @@ from sympy.polys.domains import QQ, EX from sympy.polys.rings import PolyElement, ring, sring +from sympy.polys.puiseux import PuiseuxPoly from sympy.polys.polyerrors import DomainError from sympy.polys.monomials import (monomial_min, monomial_mul, monomial_div, monomial_ldiv) @@ -89,7 +90,8 @@ def _invert_monoms(p1): def _giant_steps(target): """Return a list of precision steps for the Newton's method""" - res = giant_steps(2, target) + # We use ceil here because giant_steps cannot handle flint.fmpq + res = giant_steps(2, math.ceil(target)) if res[0] != 2: res = [2] + res return res @@ -113,13 +115,13 @@ def rs_trunc(p1, x, prec): x**5 + x + 1 """ R = p1.ring - p = R.zero + p = {} i = R.gens.index(x) for exp1 in p1: if exp1[i] >= prec: continue p[exp1] = p1[exp1] - return p + return R(p) def rs_is_puiseux(p, x): """ @@ -131,15 +133,15 @@ def rs_is_puiseux(p, x): ======== >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import rs_is_puiseux - >>> R, x = ring('x', QQ) + >>> R, x = puiseux_ring('x', QQ) >>> p = x**QQ(2,5) + x**QQ(2,3) + x >>> rs_is_puiseux(p, x) True """ index = p.ring.gens.index(x) - for k in p: + for k in p.itermonoms(): if k[index] != int(k[index]): return True if k[index] < 0: @@ -156,12 +158,12 @@ def rs_puiseux(f, p, x, prec): ======== >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import rs_puiseux, rs_exp - >>> R, x = ring('x', QQ) + >>> R, x = puiseux_ring('x', QQ) >>> p = x**QQ(2,5) + 
x**QQ(2,3) + x >>> rs_puiseux(rs_exp,p, x, 1) - 1/2*x**(4/5) + x**(2/3) + x**(2/5) + 1 + 1 + x**(2/5) + x**(2/3) + 1/2*x**(4/5) """ index = p.ring.gens.index(x) n = 1 @@ -229,18 +231,18 @@ def rs_mul(p1, p2, x, prec): 3*x**2 + 3*x + 1 """ R = p1.ring - p = R.zero + p = {} if R.__class__ != p2.ring.__class__ or R != p2.ring: raise ValueError('p1 and p2 must have the same ring') iv = R.gens.index(x) - if not isinstance(p2, PolyElement): + if not isinstance(p2, (PolyElement, PuiseuxPoly)): raise ValueError('p2 must be a polynomial') if R == p2.ring: get = p.get - items2 = list(p2.items()) + items2 = p2.terms() items2.sort(key=lambda e: e[0][iv]) if R.ngens == 1: - for exp1, v1 in p1.items(): + for exp1, v1 in p1.iterterms(): for exp2, v2 in items2: exp = exp1[0] + exp2[0] if exp < prec: @@ -250,7 +252,7 @@ def rs_mul(p1, p2, x, prec): break else: monomial_mul = R.monomial_mul - for exp1, v1 in p1.items(): + for exp1, v1 in p1.iterterms(): for exp2, v2 in items2: if exp1[iv] + exp2[iv] < prec: exp = monomial_mul(exp1, exp2) @@ -258,8 +260,7 @@ def rs_mul(p1, p2, x, prec): else: break - p.strip_zero() - return p + return R(p) def rs_square(p1, x, prec): """ @@ -277,10 +278,10 @@ def rs_square(p1, x, prec): 6*x**2 + 4*x + 1 """ R = p1.ring - p = R.zero + p = {} iv = R.gens.index(x) get = p.get - items = list(p1.items()) + items = p1.terms() items.sort(key=lambda e: e[0][iv]) monomial_mul = R.monomial_mul for i in range(len(items)): @@ -292,14 +293,13 @@ def rs_square(p1, x, prec): p[exp] = get(exp, 0) + v1*v2 else: break - p = p.imul_num(2) + p = {m: 2*v for m, v in p.items()} get = p.get - for expv, v in p1.items(): + for expv, v in p1.iterterms(): if 2*expv[iv] < prec: e2 = monomial_mul(expv, expv) p[e2] = get(e2, 0) + v**2 - p.strip_zero() - return p + return R(p) def rs_pow(p1, n, x, prec): """ @@ -753,7 +753,7 @@ def rs_diff(p, x): """ R = p.ring n = R.gens.index(x) - p1 = R.zero + p1 = {} mn = [0]*R.ngens mn[n] = 1 mn = tuple(mn) @@ -761,7 +761,7 @@ def rs_diff(p, x): if expv[n]: e = monomial_ldiv(expv, mn) p1[e] = R.domain_new(p[expv]*expv[n]) - return p1 + return R(p1) def rs_integrate(p, x): """ @@ -784,7 +784,7 @@ def rs_integrate(p, x): 1/3*x**3*y**3 + 1/2*x**2 """ R = p.ring - p1 = R.zero + p1 = {} n = R.gens.index(x) mn = [0]*R.ngens mn[n] = 1 @@ -793,7 +793,7 @@ def rs_integrate(p, x): for expv in p: e = monomial_mul(expv, mn) p1[e] = R.domain_new(p[expv]/(expv[n] + 1)) - return p1 + return R(p1) def rs_fun(p, f, *args): r""" @@ -858,31 +858,31 @@ def mul_xin(p, i, n): `x\_i` is the ith variable in ``p``. 
""" R = p.ring - q = R(0) - for k, v in p.items(): + q = {} + for k, v in p.terms(): k1 = list(k) k1[i] += n q[tuple(k1)] = v - return q + return R(q) def pow_xin(p, i, n): """ >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import pow_xin - >>> R, x, y = ring('x, y', QQ) + >>> R, x, y = puiseux_ring('x, y', QQ) >>> p = x**QQ(2,5) + x + x**QQ(2,3) >>> index = p.ring.gens.index(x) >>> pow_xin(p, index, 15) - x**15 + x**10 + x**6 + x**6 + x**10 + x**15 """ R = p.ring - q = R(0) - for k, v in p.items(): + q = {} + for k, v in p.terms(): k1 = list(k) k1[i] *= n q[tuple(k1)] = v - return q + return R(q) def _nth_root1(p, n, x, prec): """ @@ -973,7 +973,7 @@ def rs_nth_root(p, n, x, prec): c = p[zm] if R.domain is EX: c_expr = c.as_expr() - const = c_expr**QQ(1, n) + const = EX(c_expr**QQ(1, n)) elif isinstance(c, PolyElement): try: c_expr = c.as_expr() @@ -991,7 +991,7 @@ def rs_nth_root(p, n, x, prec): else: res = _nth_root1(p, n, x, prec) if m: - m = QQ(m, n) + m = QQ(m) / n res = mul_xin(res, index, m) return res @@ -1008,13 +1008,13 @@ def rs_log(p, x, prec): ======== >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import rs_log - >>> R, x = ring('x', QQ) + >>> R, x = puiseux_ring('x', QQ) >>> rs_log(1 + x, x, 8) - 1/7*x**7 - 1/6*x**6 + 1/5*x**5 - 1/4*x**4 + 1/3*x**3 - 1/2*x**2 + x + x + -1/2*x**2 + 1/3*x**3 + -1/4*x**4 + 1/5*x**5 + -1/6*x**6 + 1/7*x**7 >>> rs_log(x**QQ(3, 2) + 1, x, 5) - 1/3*x**(9/2) - 1/2*x**3 + x**(3/2) + x**(3/2) + -1/2*x**3 + 1/3*x**(9/2) """ if rs_is_puiseux(p, x): return rs_puiseux(rs_log, p, x, prec) @@ -1030,7 +1030,7 @@ def rs_log(p, x, prec): c_expr = c.as_expr() if R.domain is EX: const = log(c_expr) - elif isinstance(c, PolyElement): + elif isinstance(c, (PolyElement, PuiseuxPoly)): try: const = R(log(c_expr)) except ValueError: @@ -1400,13 +1400,13 @@ def rs_sin(p, x, prec): ======== >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import rs_sin - >>> R, x, y = ring('x, y', QQ) + >>> R, x, y = puiseux_ring('x, y', QQ) >>> rs_sin(x + x*y, x, 4) - -1/6*x**3*y**3 - 1/2*x**3*y**2 - 1/2*x**3*y - 1/6*x**3 + x*y + x + x + x*y + -1/6*x**3 + -1/2*x**3*y + -1/2*x**3*y**2 + -1/6*x**3*y**3 >>> rs_sin(x**QQ(3, 2) + x*y**QQ(7, 5), x, 4) - -1/2*x**(7/2)*y**(14/5) - 1/6*x**3*y**(21/5) + x**(3/2) + x*y**(7/5) + x*y**(7/5) + x**(3/2) + -1/6*x**3*y**(21/5) + -1/2*x**(7/2)*y**(14/5) See Also ======== @@ -1470,13 +1470,13 @@ def rs_cos(p, x, prec): ======== >>> from sympy.polys.domains import QQ - >>> from sympy.polys.rings import ring + >>> from sympy.polys.puiseux import puiseux_ring >>> from sympy.polys.ring_series import rs_cos - >>> R, x, y = ring('x, y', QQ) + >>> R, x, y = puiseux_ring('x, y', QQ) >>> rs_cos(x + x*y, x, 4) - -1/2*x**2*y**2 - x**2*y - 1/2*x**2 + 1 + 1 + -1/2*x**2 + -1*x**2*y + -1/2*x**2*y**2 >>> rs_cos(x + x*y, x, 4)/x**QQ(7, 5) - -1/2*x**(3/5)*y**2 - x**(3/5)*y - 1/2*x**(3/5) + x**(-7/5) + x**(-7/5) + -1/2*x**(3/5) + -1*x**(3/5)*y + -1/2*x**(3/5)*y**2 See Also ======== @@ -1830,7 +1830,7 @@ def rs_compose_add(p1, p2): np2e = rs_hadamard_exp(np2) np3e = rs_mul(np1e, np2e, x, prec) np3 = rs_hadamard_exp(np3e, True) - np3a = (np3[(0,)] - np3)/x + np3a = (np3[(0,)] - np3) / x q = rs_integrate(np3a, x) q = rs_exp(q, x, 
prec) q = _invert_monoms(q) @@ -1960,8 +1960,8 @@ def rs_series(expr, a, prec): Parameters ========== - expr : :class:`Expr` - a : :class:`Symbol` with respect to which expr is to be expanded + expr : :class:`~.Expr` + a : :class:`~.Symbol` with respect to which expr is to be expanded prec : order of the series expansion Currently supports multivariate Taylor series expansion. This is much diff --git a/sympy/polys/rings.py b/sympy/polys/rings.py index 9103b1737af1..2e902f30f809 100644 --- a/sympy/polys/rings.py +++ b/sympy/polys/rings.py @@ -1,12 +1,12 @@ """Sparse polynomial rings. """ from __future__ import annotations -from typing import Any from operator import add, mul, lt, le, gt, ge from functools import reduce from types import GeneratorType +from sympy.core.cache import cacheit from sympy.core.expr import Expr from sympy.core.intfunc import igcd from sympy.core.symbol import Symbol, symbols as _symbols @@ -192,7 +192,6 @@ def _parse_symbols(symbols): raise GeneratorsError("expected a string, Symbol or expression or a non-empty sequence of strings, Symbols or expressions") -_ring_cache: dict[Any, Any] = {} class PolyRing(DefaultPrinting, IPolys): """Multivariate distributed polynomial ring. """ @@ -210,61 +209,58 @@ def __new__(cls, symbols, domain, order=lex): order = OrderOpt.preprocess(order) _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _ring_cache.get(_hash_tuple) - - if obj is None: - if domain.is_Composite and set(symbols) & set(domain.symbols): - raise GeneratorsError("polynomial ring and it's ground domain share generators") - - obj = object.__new__(cls) - obj._hash_tuple = _hash_tuple - obj._hash = hash(_hash_tuple) - obj.dtype = type("PolyElement", (PolyElement,), {"ring": obj}) - obj.symbols = symbols - obj.ngens = ngens - obj.domain = domain - obj.order = order - - obj.zero_monom = (0,)*ngens - obj.gens = obj._gens() - obj._gens_set = set(obj.gens) - - obj._one = [(obj.zero_monom, domain.one)] - - if ngens: - # These expect monomials in at least one variable - codegen = MonomialOps(ngens) - obj.monomial_mul = codegen.mul() - obj.monomial_pow = codegen.pow() - obj.monomial_mulpow = codegen.mulpow() - obj.monomial_ldiv = codegen.ldiv() - obj.monomial_div = codegen.div() - obj.monomial_lcm = codegen.lcm() - obj.monomial_gcd = codegen.gcd() - else: - monunit = lambda a, b: () - obj.monomial_mul = monunit - obj.monomial_pow = monunit - obj.monomial_mulpow = lambda a, b, c: () - obj.monomial_ldiv = monunit - obj.monomial_div = monunit - obj.monomial_lcm = monunit - obj.monomial_gcd = monunit - - - if order is lex: - obj.leading_expv = max - else: - obj.leading_expv = lambda f: max(f, key=order) - for symbol, generator in zip(obj.symbols, obj.gens): - if isinstance(symbol, Symbol): - name = symbol.name + if domain.is_Composite and set(symbols) & set(domain.symbols): + raise GeneratorsError("polynomial ring and it's ground domain share generators") + + obj = object.__new__(cls) + obj._hash_tuple = _hash_tuple + obj._hash = hash(_hash_tuple) + obj.symbols = symbols + obj.ngens = ngens + obj.domain = domain + obj.order = order + + obj.dtype = PolyElement(obj, ()).new + + obj.zero_monom = (0,)*ngens + obj.gens = obj._gens() + obj._gens_set = set(obj.gens) + + obj._one = [(obj.zero_monom, domain.one)] + + if ngens: + # These expect monomials in at least one variable + codegen = MonomialOps(ngens) + obj.monomial_mul = codegen.mul() + obj.monomial_pow = codegen.pow() + obj.monomial_mulpow = codegen.mulpow() + obj.monomial_ldiv = codegen.ldiv() + 
obj.monomial_div = codegen.div() + obj.monomial_lcm = codegen.lcm() + obj.monomial_gcd = codegen.gcd() + else: + monunit = lambda a, b: () + obj.monomial_mul = monunit + obj.monomial_pow = monunit + obj.monomial_mulpow = lambda a, b, c: () + obj.monomial_ldiv = monunit + obj.monomial_div = monunit + obj.monomial_lcm = monunit + obj.monomial_gcd = monunit + + + if order is lex: + obj.leading_expv = max + else: + obj.leading_expv = lambda f: max(f, key=order) - if not hasattr(obj, name): - setattr(obj, name, generator) + for symbol, generator in zip(obj.symbols, obj.gens): + if isinstance(symbol, Symbol): + name = symbol.name - _ring_cache[_hash_tuple] = obj + if not hasattr(obj, name): + setattr(obj, name, generator) return obj @@ -304,6 +300,13 @@ def __ne__(self, other): return not self == other def clone(self, symbols=None, domain=None, order=None): + # Need a hashable tuple for cacheit to work + if symbols is not None and isinstance(symbols, list): + symbols = tuple(symbols) + return self._clone(symbols, domain, order) + + @cacheit + def _clone(self, symbols, domain, order): return self.__class__(symbols or self.symbols, domain or self.domain, order or self.order) def monomial_basis(self, i): @@ -314,12 +317,16 @@ def monomial_basis(self, i): @property def zero(self): - return self.dtype() + return self.dtype([]) @property def one(self): return self.dtype(self._one) + def is_element(self, element): + """True if ``element`` is an element of this ring. False otherwise. """ + return isinstance(element, PolyElement) and element.ring == self + def domain_new(self, element, orig_domain=None): return self.domain.convert(element, orig_domain) @@ -423,7 +430,7 @@ def index(self, gen): i = -i - 1 else: raise ValueError("invalid generator index: %s" % gen) - elif isinstance(gen, self.dtype): + elif self.is_element(gen): try: i = self.gens.index(gen) except ValueError: @@ -579,8 +586,24 @@ def symmetric_poly(self, n): class PolyElement(DomainElement, DefaultPrinting, CantSympify, dict): """Element of multivariate distributed polynomial ring. """ + def __init__(self, ring, init): + super().__init__(init) + self.ring = ring + # This check would be too slow to run every time: + # self._check() + + def _check(self): + assert isinstance(self, PolyElement) + assert isinstance(self.ring, PolyRing) + dom = self.ring.domain + assert isinstance(dom, Domain) + for monom, coeff in self.items(): + assert dom.of_type(coeff) + assert len(monom) == self.ring.ngens + assert all(isinstance(exp, int) and exp >= 0 for exp in monom) + def new(self, init): - return self.__class__(init) + return self.__class__(self.ring, init) def parent(self): return self.ring.to_domain() @@ -695,7 +718,7 @@ def __eq__(p1, p2): """ if not p2: return not p1 - elif isinstance(p2, PolyElement) and p2.ring == p1.ring: + elif p1.ring.is_element(p2): return dict.__eq__(p1, p2) elif len(p1) > 1: return False @@ -709,7 +732,7 @@ def almosteq(p1, p2, tolerance=None): """Approximate equality test for polynomials. 
""" ring = p1.ring - if isinstance(p2, ring.dtype): + if ring.is_element(p2): if set(p1.keys()) != set(p2.keys()): return False @@ -733,7 +756,7 @@ def sort_key(self): return (len(self), self.terms()) def _cmp(p1, p2, op): - if isinstance(p2, p1.ring.dtype): + if p1.ring.is_element(p2): return op(p1.sort_key(), p2.sort_key()) else: return NotImplemented @@ -956,7 +979,7 @@ def __add__(p1, p2): if not p2: return p1.copy() ring = p1.ring - if isinstance(p2, ring.dtype): + if ring.is_element(p2): p = p1.copy() get = p.get zero = ring.domain.zero @@ -1032,7 +1055,7 @@ def __sub__(p1, p2): if not p2: return p1.copy() ring = p1.ring - if isinstance(p2, ring.dtype): + if ring.is_element(p2): p = p1.copy() get = p.get zero = ring.domain.zero @@ -1092,6 +1115,7 @@ def __rsub__(p1, n): for expv in p1: p[expv] = -p1[expv] p += n + # p._check() return p def __mul__(p1, p2): @@ -1114,7 +1138,7 @@ def __mul__(p1, p2): p = ring.zero if not p1 or not p2: return p - elif isinstance(p2, ring.dtype): + elif ring.is_element(p2): get = p.get zero = ring.domain.zero monomial_mul = ring.monomial_mul @@ -1124,6 +1148,7 @@ def __mul__(p1, p2): exp = monomial_mul(exp1, exp2) p[exp] = get(exp, zero) + v1*v2 p.strip_zero() + # p._check() return p elif isinstance(p2, PolyElement): if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring: @@ -1142,6 +1167,7 @@ def __mul__(p1, p2): v = v1*p2 if v: p[exp1] = v + # p._check() return p def __rmul__(p1, p2): @@ -1188,6 +1214,11 @@ def __pow__(self, n): x**3 + 3*x**2*y**2 + 3*x*y**4 + y**6 """ + if not isinstance(n, int): + raise TypeError("exponent must be an integer, got %s" % n) + elif n < 0: + raise ValueError("exponent must be a non-negative integer, got %s" % n) + ring = self.ring if not n: @@ -1202,6 +1233,7 @@ def __pow__(self, n): p[ring.monomial_pow(monom, n)] = coeff else: p[ring.monomial_pow(monom, n)] = coeff**n + # p._check() return p # For ring series, we need negative and rational exponent support only @@ -1300,6 +1332,7 @@ def square(self): k2 = monomial_mul(k, k) p[k2] = get(k2, zero) + v**2 p.strip_zero() + # p._check() return p def __divmod__(p1, p2): @@ -1307,7 +1340,7 @@ def __divmod__(p1, p2): if not p2: raise ZeroDivisionError("polynomial division") - elif isinstance(p2, ring.dtype): + elif ring.is_element(p2): return p1.div(p2) elif isinstance(p2, PolyElement): if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring: @@ -1325,14 +1358,20 @@ def __divmod__(p1, p2): return (p1.quo_ground(p2), p1.rem_ground(p2)) def __rdivmod__(p1, p2): - return NotImplemented + ring = p1.ring + try: + p2 = ring.ground_new(p2) + except CoercionFailed: + return NotImplemented + else: + return p2.div(p1) def __mod__(p1, p2): ring = p1.ring if not p2: raise ZeroDivisionError("polynomial division") - elif isinstance(p2, ring.dtype): + elif ring.is_element(p2): return p1.rem(p2) elif isinstance(p2, PolyElement): if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring: @@ -1350,18 +1389,21 @@ def __mod__(p1, p2): return p1.rem_ground(p2) def __rmod__(p1, p2): - return NotImplemented + ring = p1.ring + try: + p2 = ring.ground_new(p2) + except CoercionFailed: + return NotImplemented + else: + return p2.rem(p1) - def __truediv__(p1, p2): + def __floordiv__(p1, p2): ring = p1.ring if not p2: raise ZeroDivisionError("polynomial division") - elif isinstance(p2, ring.dtype): - if p2.is_monomial: - return p1*(p2**(-1)) - else: - return p1.quo(p2) + elif ring.is_element(p2): + return p1.quo(p2) elif isinstance(p2, PolyElement): 
if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring: pass @@ -1377,13 +1419,45 @@ def __truediv__(p1, p2): else: return p1.quo_ground(p2) - def __rtruediv__(p1, p2): - return NotImplemented + def __rfloordiv__(p1, p2): + ring = p1.ring + try: + p2 = ring.ground_new(p2) + except CoercionFailed: + return NotImplemented + else: + return p2.quo(p1) - __floordiv__ = __truediv__ - __rfloordiv__ = __rtruediv__ + def __truediv__(p1, p2): + ring = p1.ring + + if not p2: + raise ZeroDivisionError("polynomial division") + elif ring.is_element(p2): + return p1.exquo(p2) + elif isinstance(p2, PolyElement): + if isinstance(ring.domain, PolynomialRing) and ring.domain.ring == p2.ring: + pass + elif isinstance(p2.ring.domain, PolynomialRing) and p2.ring.domain.ring == ring: + return p2.__rtruediv__(p1) + else: + return NotImplemented - # TODO: use // (__floordiv__) for exquo()? + try: + p2 = ring.domain_new(p2) + except CoercionFailed: + return NotImplemented + else: + return p1.quo_ground(p2) + + def __rtruediv__(p1, p2): + ring = p1.ring + try: + p2 = ring.ground_new(p2) + except CoercionFailed: + return NotImplemented + else: + return p2.exquo(p1) def _term_div(self): zm = self.ring.zero_monom @@ -1738,7 +1812,7 @@ def coeff(self, element): """ if element == 1: return self._get_coeff(self.ring.zero_monom) - elif isinstance(element, self.ring.dtype): + elif self.ring.is_element(element): terms = list(element.iterterms()) if len(terms) == 1: monom, coeff = terms[0] diff --git a/sympy/polys/tests/test_fields.py b/sympy/polys/tests/test_fields.py index da9f39101599..4f85a00d75dc 100644 --- a/sympy/polys/tests/test_fields.py +++ b/sympy/polys/tests/test_fields.py @@ -29,19 +29,10 @@ def test_FracField___hash__(): def test_FracField___eq__(): assert field("x,y,z", QQ)[0] == field("x,y,z", QQ)[0] - assert field("x,y,z", QQ)[0] is field("x,y,z", QQ)[0] - assert field("x,y,z", QQ)[0] != field("x,y,z", ZZ)[0] - assert field("x,y,z", QQ)[0] is not field("x,y,z", ZZ)[0] - assert field("x,y,z", ZZ)[0] != field("x,y,z", QQ)[0] - assert field("x,y,z", ZZ)[0] is not field("x,y,z", QQ)[0] - assert field("x,y,z", QQ)[0] != field("x,y", QQ)[0] - assert field("x,y,z", QQ)[0] is not field("x,y", QQ)[0] - assert field("x,y", QQ)[0] != field("x,y,z", QQ)[0] - assert field("x,y", QQ)[0] is not field("x,y,z", QQ)[0] def test_sfield(): x = symbols("x") @@ -99,34 +90,34 @@ def test_FracElement_from_expr(): F, X, Y, Z = field((x, y, z), ZZ) f = F.from_expr(1) - assert f == 1 and isinstance(f, F.dtype) + assert f == 1 and F.is_element(f) f = F.from_expr(Rational(3, 7)) - assert f == F(3)/7 and isinstance(f, F.dtype) + assert f == F(3)/7 and F.is_element(f) f = F.from_expr(x) - assert f == X and isinstance(f, F.dtype) + assert f == X and F.is_element(f) f = F.from_expr(Rational(3,7)*x) - assert f == X*Rational(3, 7) and isinstance(f, F.dtype) + assert f == X*Rational(3, 7) and F.is_element(f) f = F.from_expr(1/x) - assert f == 1/X and isinstance(f, F.dtype) + assert f == 1/X and F.is_element(f) f = F.from_expr(x*y*z) - assert f == X*Y*Z and isinstance(f, F.dtype) + assert f == X*Y*Z and F.is_element(f) f = F.from_expr(x*y/z) - assert f == X*Y/Z and isinstance(f, F.dtype) + assert f == X*Y/Z and F.is_element(f) f = F.from_expr(x*y*z + x*y + x) - assert f == X*Y*Z + X*Y + X and isinstance(f, F.dtype) + assert f == X*Y*Z + X*Y + X and F.is_element(f) f = F.from_expr((x*y*z + x*y + x)/(x*y + 7)) - assert f == (X*Y*Z + X*Y + X)/(X*Y + 7) and isinstance(f, F.dtype) + assert f == (X*Y*Z + X*Y + X)/(X*Y + 7) and 
F.is_element(f) f = F.from_expr(x**3*y*z + x**2*y**7 + 1) - assert f == X**3*Y*Z + X**2*Y**7 + 1 and isinstance(f, F.dtype) + assert f == X**3*Y*Z + X**2*Y**7 + 1 and F.is_element(f) raises(ValueError, lambda: F.from_expr(2**x)) raises(ValueError, lambda: F.from_expr(7*x + sqrt(2))) diff --git a/sympy/polys/tests/test_puiseux.py b/sympy/polys/tests/test_puiseux.py new file mode 100644 index 000000000000..031881e9d12c --- /dev/null +++ b/sympy/polys/tests/test_puiseux.py @@ -0,0 +1,204 @@ +# +# Tests for PuiseuxRing and PuiseuxPoly +# + +from sympy.testing.pytest import raises + +from sympy import ZZ, QQ, ring +from sympy.polys.puiseux import PuiseuxRing, PuiseuxPoly, puiseux_ring + +from sympy.abc import x, y + + +def test_puiseux_ring(): + R, px = puiseux_ring('x', QQ) + R2, px2 = puiseux_ring([x], QQ) + assert isinstance(R, PuiseuxRing) + assert isinstance(px, PuiseuxPoly) + assert R == R2 + assert px == px2 + assert R == PuiseuxRing('x', QQ) + assert R == PuiseuxRing([x], QQ) + assert R != PuiseuxRing('y', QQ) + assert R != PuiseuxRing('x', ZZ) + assert R != PuiseuxRing('x, y', QQ) + assert R != QQ + assert str(R) == 'PuiseuxRing((x,), QQ)' + + +def test_puiseux_ring_attributes(): + R1, px1, py1 = ring('x, y', QQ) + R2, px2, py2 = puiseux_ring('x, y', QQ) + assert R2.domain == QQ + assert R2.symbols == (x, y) + assert R2.gens == (px2, py2) + assert R2.ngens == 2 + assert R2.poly_ring == R1 + assert R2.zero == PuiseuxPoly(R1.zero, R2) + assert R2.one == PuiseuxPoly(R1.one, R2) + assert R2.zero_monom == R1.zero_monom == (0, 0) # type: ignore + assert R2.monomial_mul((1, 2), (3, 4)) == (4, 6) + + +def test_puiseux_ring_methods(): + R1, px1, py1 = ring('x, y', QQ) + R2, px2, py2 = puiseux_ring('x, y', QQ) + assert R2({(1, 2): 3}) == 3*px2*py2**2 + assert R2(px1) == px2 + assert R2(1) == R2.one + assert R2(QQ(1,2)) == QQ(1,2)*R2.one + assert R2.from_poly(px1) == px2 + assert R2.from_poly(px1) != py2 + assert R2.from_dict({(1, 2): QQ(3)}) == 3*px2*py2**2 + assert R2.from_dict({(QQ(1,2), 2): QQ(3)}) == 3*px2**QQ(1,2)*py2**2 + assert R2.from_int(3) == 3*R2.one + assert R2.domain_new(3) == QQ(3) + assert QQ.of_type(R2.domain_new(3)) + assert R2.ground_new(3) == 3*R2.one + assert isinstance(R2.ground_new(3), PuiseuxPoly) + assert R2.index(px2) == 0 + assert R2.index(py2) == 1 + + +def test_puiseux_poly(): + R1, px1 = ring('x', QQ) + R2, px2 = puiseux_ring('x', QQ) + assert PuiseuxPoly(px1, R2) == px2 + assert px2.ring == R2 + assert px2.as_expr() == px1.as_expr() == x + assert px1 != px2 + assert R2.one == px2**0 == 1 + assert px2 == px1 + assert px2 != 2.0 + assert px2**QQ(1,2) != px1 + + +def test_puiseux_poly_normalization(): + R, x = puiseux_ring('x', QQ) + assert (x**2 + 1) / x == x + 1/x == R({(1,): 1, (-1,): 1}) + assert (x**QQ(1,6))**2 == x**QQ(1,3) == R({(QQ(1,3),): 1}) + assert (x**QQ(1,6))**(-2) == x**(-QQ(1,3)) == R({(-QQ(1,3),): 1}) + assert (x**QQ(1,6))**QQ(1,2) == x**QQ(1,12) == R({(QQ(1,12),): 1}) + assert (x**QQ(1,6))**6 == x == R({(1,): 1}) + assert x**QQ(1,6) * x**QQ(1,3) == x**QQ(1,2) == R({(QQ(1,2),): 1}) + assert 1/x * x**2 == x == R({(1,): 1}) + assert 1/x**QQ(1,3) * x**QQ(1,3) == 1 == R({(0,): 1}) + + +def test_puiseux_poly_monoms(): + R, x = puiseux_ring('x', QQ) + assert x.monoms() == [(1,)] + assert list(x) == [(1,)] + assert (x**2 + 1).monoms() == [(2,), (0,)] + assert R({(1,): 1, (-1,): 1}).monoms() == [(1,), (-1,)] + assert R({(QQ(1,3),): 1}).monoms() == [(QQ(1,3),)] + assert R({(-QQ(1,3),): 1}).monoms() == [(-QQ(1,3),)] + p = x**QQ(1,6) + assert p[(QQ(1,6),)] == 1 + 
raises(KeyError, lambda: p[(1,)]) + assert p.to_dict() == {(QQ(1,6),): 1} + assert R(p.to_dict()) == p + assert PuiseuxPoly.from_dict({(QQ(1,6),): 1}, R) == p + + +def test_puiseux_poly_repr(): + R, x = puiseux_ring('x', QQ) + assert repr(x) == 'x' + assert repr(x**QQ(1,2)) == 'x**(1/2)' + assert repr(1/x) == 'x**(-1)' + assert repr(2*x**2 + 1) == '1 + 2*x**2' + assert repr(R.one) == '1' + assert repr(2*R.one) == '2' + + +def test_puiseux_poly_unify(): + R, x = puiseux_ring('x', QQ) + assert 1/x + x == x + 1/x == R({(1,): 1, (-1,): 1}) + assert repr(1/x + x) == 'x**(-1) + x' + assert 1/x + 1/x == 2/x == R({(-1,): 2}) + assert repr(1/x + 1/x) == '2*x**(-1)' + assert x**QQ(1,2) + x**QQ(1,2) == 2*x**QQ(1,2) == R({(QQ(1,2),): 2}) + assert repr(x**QQ(1,2) + x**QQ(1,2)) == '2*x**(1/2)' + assert x**QQ(1,2) + x**QQ(1,3) == R({(QQ(1,2),): 1, (QQ(1,3),): 1}) + assert repr(x**QQ(1,2) + x**QQ(1,3)) == 'x**(1/3) + x**(1/2)' + assert x + x**QQ(1,2) == R({(1,): 1, (QQ(1,2),): 1}) + assert repr(x + x**QQ(1,2)) == 'x**(1/2) + x' + assert 1/x**QQ(1,2) + 1/x**QQ(1,3) == R({(-QQ(1,2),): 1, (-QQ(1,3),): 1}) + assert repr(1/x**QQ(1,2) + 1/x**QQ(1,3)) == 'x**(-1/2) + x**(-1/3)' + assert 1/x + x**QQ(1,2) == x**QQ(1,2) + 1/x == R({(-1,): 1, (QQ(1,2),): 1}) + assert repr(1/x + x**QQ(1,2)) == 'x**(-1) + x**(1/2)' + + +def test_puiseux_poly_arit(): + R, x = puiseux_ring('x', QQ) + R2, y = puiseux_ring('y', QQ) + p = x**2 + 1 + assert +p == p + assert -p == -1 - x**2 + assert p + p == 2*p == 2*x**2 + 2 + assert p + 1 == 1 + p == x**2 + 2 + assert p + QQ(1,2) == QQ(1,2) + p == x**2 + QQ(3,2) + assert p - p == 0 + assert p - 1 == -1 + p == x**2 + assert p - QQ(1,2) == -QQ(1,2) + p == x**2 + QQ(1,2) + assert 1 - p == -p + 1 == -x**2 + assert QQ(1,2) - p == -p + QQ(1,2) == -x**2 - QQ(1,2) + assert p * p == x**4 + 2*x**2 + 1 + assert p * 1 == 1 * p == p + assert 2 * p == p * 2 == 2*x**2 + 2 + assert p * QQ(1,2) == QQ(1,2) * p == QQ(1,2)*x**2 + QQ(1,2) + assert x**QQ(1,2) * x**QQ(1,2) == x + raises(ValueError, lambda: x + y) + raises(ValueError, lambda: x - y) + raises(ValueError, lambda: x * y) + raises(TypeError, lambda: x + None) + raises(TypeError, lambda: x - None) + raises(TypeError, lambda: x * None) + raises(TypeError, lambda: None + x) + raises(TypeError, lambda: None - x) + raises(TypeError, lambda: None * x) + + +def test_puiseux_poly_div(): + R, x = puiseux_ring('x', QQ) + R2, y = puiseux_ring('y', QQ) + p = x**2 - 1 + assert p / 1 == p + assert p / QQ(1,2) == 2*p == 2*x**2 - 2 + assert p / x == x - 1/x == R({(1,): 1, (-1,): -1}) + assert 2 / x == 2*x**-1 == R({(-1,): 2}) + assert QQ(1,2) / x == QQ(1,2)*x**-1 == 1/(2*x) == 1/x/2 == R({(-1,): QQ(1,2)}) + raises(ZeroDivisionError, lambda: p / 0) + raises(ValueError, lambda: (x + 1) / (x + 2)) + raises(ValueError, lambda: (x + 1) / (x + 1)) + raises(ValueError, lambda: x / y) + raises(TypeError, lambda: x / None) + raises(TypeError, lambda: None / x) + + +def test_puiseux_poly_pow(): + R, x = puiseux_ring('x', QQ) + Rz, xz = puiseux_ring('x', ZZ) + assert x**0 == 1 == R({(0,): 1}) + assert x**1 == x == R({(1,): 1}) + assert x**2 == x*x == R({(2,): 1}) + assert x**QQ(1,2) == R({(QQ(1,2),): 1}) + assert x**-1 == 1/x == R({(-1,): 1}) + assert x**-QQ(1,2) == 1/x**QQ(1,2) == R({(-QQ(1,2),): 1}) + assert (2*x)**-1 == 1/(2*x) == QQ(1,2)/x == QQ(1,2)*x**-1 == R({(-1,): QQ(1,2)}) + assert 2/x**2 == 2*x**-2 == R({(-2,): 2}) + assert 2/xz**2 == 2*xz**-2 == Rz({(-2,): 2}) + raises(TypeError, lambda: x**None) + raises(ValueError, lambda: (x + 1)**-1) + raises(ValueError, 
lambda: (x + 1)**QQ(1,2)) + raises(ValueError, lambda: (2*x)**QQ(1,2)) + raises(ValueError, lambda: (2*xz)**-1) + + +def test_puiseux_poly_diff(): + R, x, y = puiseux_ring('x, y', QQ) + assert (x**2 + 1).diff(x) == 2*x + assert (x**2 + 1).diff(y) == 0 + assert (x**2 + y**2).diff(x) == 2*x + assert (x**QQ(1,2) + y**QQ(1,2)).diff(x) == QQ(1,2)*x**-QQ(1,2) + assert ((x*y)**QQ(1,2)).diff(x) == QQ(1,2)*y**QQ(1,2)*x**-QQ(1,2) diff --git a/sympy/polys/tests/test_ring_series.py b/sympy/polys/tests/test_ring_series.py index 0f70c05d3888..b19156fbaceb 100644 --- a/sympy/polys/tests/test_ring_series.py +++ b/sympy/polys/tests/test_ring_series.py @@ -1,5 +1,6 @@ from sympy.polys.domains import QQ, EX, RR from sympy.polys.rings import ring +from sympy.polys.puiseux import puiseux_ring from sympy.polys.ring_series import (_invert_monoms, rs_integrate, rs_trunc, rs_mul, rs_square, rs_pow, _has_constant_term, rs_hadamard_exp, rs_series_from_list, rs_exp, rs_log, rs_newton, rs_series_inversion, @@ -141,11 +142,11 @@ def test_series_from_list(): p2 += cx*rs_pow(p, i, x, h) assert p1 == p2 + def test_log(): R, x = ring('x', QQ) p = 1 + x - p1 = rs_log(p, x, 4)/x**2 - assert p1 == Rational(1, 3)*x - S.Half + x**(-1) + assert rs_log(p, x, 4) == x - x**2/2 + x**3/3 p = 1 + x +2*x**2/3 p1 = rs_log(p, x, 9) assert p1 == -17*x**8/648 + 13*x**7/189 - 11*x**6/162 - x**5/45 + \ @@ -172,6 +173,7 @@ def test_log(): p = x + x**2 + 3 assert rs_log(p, x, 10).compose(x, 5) == EX(log(3) + Rational(19281291595, 9920232)) + def test_exp(): R, x = ring('x', QQ) p = x + x**4 @@ -222,8 +224,9 @@ def test_fun(): assert rs_fun(p, rs_tan, x, 10) == rs_tan(p, x, 10) assert rs_fun(p, _tan1, x, 10) == _tan1(p, x, 10) + def test_nth_root(): - R, x, y = ring('x, y', QQ) + R, x, y = puiseux_ring('x, y', QQ) assert rs_nth_root(1 + x**2*y, 4, x, 10) == -77*x**8*y**4/2048 + \ 7*x**6*y**3/128 - 3*x**4*y**2/32 + x**2*y/4 + 1 assert rs_nth_root(1 + x*y + x**2*y**3, 3, x, 5) == -x**4*y**6/9 + \ @@ -236,14 +239,15 @@ def test_nth_root(): # Constant term in series a = symbols('a') - R, x, y = ring('x, y', EX) - assert rs_nth_root(x + a, 3, x, 4) == EX(5/(81*a**QQ(8, 3)))*x**3 - \ + R, x, y = puiseux_ring('x, y', EX) + assert rs_nth_root(x + EX(a), 3, x, 4) == EX(5/(81*a**QQ(8, 3)))*x**3 - \ EX(1/(9*a**QQ(5, 3)))*x**2 + EX(1/(3*a**QQ(2, 3)))*x + EX(a**QQ(1, 3)) assert rs_nth_root(x**QQ(2, 3) + x**2*y + 5, 2, x, 3) == -EX(sqrt(5)/100)*\ x**QQ(8, 3)*y - EX(sqrt(5)/16000)*x**QQ(8, 3) + EX(sqrt(5)/10)*x**2*y + \ EX(sqrt(5)/2000)*x**2 - EX(sqrt(5)/200)*x**QQ(4, 3) + \ EX(sqrt(5)/10)*x**QQ(2, 3) + EX(sqrt(5)) + def test_atan(): R, x, y = ring('x, y', QQ) assert rs_atan(x, x, 9) == -x**7/7 + x**5/5 - x**3/3 + x @@ -272,8 +276,7 @@ def test_asin(): def test_tan(): R, x, y = ring('x, y', QQ) - assert rs_tan(x, x, 9)/x**5 == \ - Rational(17, 315)*x**2 + Rational(2, 15) + Rational(1, 3)*x**(-2) + x**(-4) + assert rs_tan(x, x, 9) == x + x**3/3 + QQ(2,15)*x**5 + QQ(17,315)*x**7 assert rs_tan(x*y + x**2*y**3, x, 9) == 4*x**8*y**11/3 + 17*x**8*y**9/45 + \ 4*x**7*y**9/3 + 17*x**7*y**7/315 + x**6*y**9/3 + 2*x**6*y**7/3 + \ x**5*y**7 + 2*x**5*y**5/15 + x**4*y**5 + x**3*y**3/3 + x**2*y**3 + x*y @@ -301,18 +304,19 @@ def test_tan(): assert rs_atan(p, x, 10).compose(x, 10) == EX(atan(5) + S(67701870330562640) / \ 668083460499) + def test_cot(): - R, x, y = ring('x, y', QQ) + R, x, y = puiseux_ring('x, y', QQ) assert rs_cot(x**6 + x**7, x, 8) == x**(-6) - x**(-5) + x**(-4) - \ x**(-3) + x**(-2) - x**(-1) + 1 - x + x**2 - x**3 + x**4 - x**5 + \ 2*x**6/3 - 4*x**7/3 
assert rs_cot(x + x**2*y, x, 5) == -x**4*y**5 - x**4*y/15 + x**3*y**4 - \ x**3/45 - x**2*y**3 - x**2*y/3 + x*y**2 - x/3 - y + x**(-1) + def test_sin(): R, x, y = ring('x, y', QQ) - assert rs_sin(x, x, 9)/x**5 == \ - Rational(-1, 5040)*x**2 + Rational(1, 120) - Rational(1, 6)*x**(-2) + x**(-4) + assert rs_sin(x, x, 9) == x - x**3/6 + x**5/120 - x**7/5040 assert rs_sin(x*y + x**2*y**3, x, 9) == x**8*y**11/12 - \ x**8*y**9/720 + x**7*y**9/12 - x**7*y**7/5040 - x**6*y**9/6 + \ x**6*y**7/24 - x**5*y**7/2 + x**5*y**5/120 - x**4*y**5/2 - \ @@ -337,8 +341,7 @@ def test_sin(): def test_cos(): R, x, y = ring('x, y', QQ) - assert rs_cos(x, x, 9)/x**5 == \ - Rational(1, 40320)*x**3 - Rational(1, 720)*x + Rational(1, 24)*x**(-1) - S.Half*x**(-3) + x**(-5) + assert rs_cos(x, x, 9) == 1 - x**2/2 + x**4/24 - x**6/720 + x**8/40320 assert rs_cos(x*y + x**2*y**3, x, 9) == x**8*y**12/24 - \ x**8*y**10/48 + x**8*y**8/40320 + x**7*y**10/6 - \ x**7*y**8/120 + x**6*y**8/4 - x**6*y**6/720 + x**5*y**6/6 - \ @@ -372,7 +375,7 @@ def test_cos_sin(): def test_atanh(): R, x, y = ring('x, y', QQ) - assert rs_atanh(x, x, 9)/x**5 == Rational(1, 7)*x**2 + Rational(1, 5) + Rational(1, 3)*x**(-2) + x**(-4) + assert rs_atanh(x, x, 9) == x + x**3/3 + x**5/5 + x**7/7 assert rs_atanh(x*y + x**2*y**3, x, 9) == 2*x**8*y**11 + x**8*y**9 + \ 2*x**7*y**9 + x**7*y**7/7 + x**6*y**9/3 + x**6*y**7 + x**5*y**7 + \ x**5*y**5/5 + x**4*y**5 + x**3*y**3/3 + x**2*y**3 + x*y @@ -395,7 +398,7 @@ def test_atanh(): def test_sinh(): R, x, y = ring('x, y', QQ) - assert rs_sinh(x, x, 9)/x**5 == Rational(1, 5040)*x**2 + Rational(1, 120) + Rational(1, 6)*x**(-2) + x**(-4) + assert rs_sinh(x, x, 9) == x + x**3/6 + x**5/120 + x**7/5040 assert rs_sinh(x*y + x**2*y**3, x, 9) == x**8*y**11/12 + \ x**8*y**9/720 + x**7*y**9/12 + x**7*y**7/5040 + x**6*y**9/6 + \ x**6*y**7/24 + x**5*y**7/2 + x**5*y**5/120 + x**4*y**5/2 + \ @@ -403,8 +406,7 @@ def test_sinh(): def test_cosh(): R, x, y = ring('x, y', QQ) - assert rs_cosh(x, x, 9)/x**5 == Rational(1, 40320)*x**3 + Rational(1, 720)*x + Rational(1, 24)*x**(-1) + \ - S.Half*x**(-3) + x**(-5) + assert rs_cosh(x, x, 9) == 1 + x**2/2 + x**4/24 + x**6/720 + x**8/40320 assert rs_cosh(x*y + x**2*y**3, x, 9) == x**8*y**12/24 + \ x**8*y**10/48 + x**8*y**8/40320 + x**7*y**10/6 + \ x**7*y**8/120 + x**6*y**8/4 + x**6*y**6/720 + x**5*y**6/6 + \ @@ -412,7 +414,7 @@ def test_cosh(): def test_tanh(): R, x, y = ring('x, y', QQ) - assert rs_tanh(x, x, 9)/x**5 == Rational(-17, 315)*x**2 + Rational(2, 15) - Rational(1, 3)*x**(-2) + x**(-4) + assert rs_tanh(x, x, 9) == x - QQ(1,3)*x**3 + QQ(2,15)*x**5 - QQ(17,315)*x**7 assert rs_tanh(x*y + x**2*y**3, x, 9) == 4*x**8*y**11/3 - \ 17*x**8*y**9/45 + 4*x**7*y**9/3 - 17*x**7*y**7/315 - x**6*y**9/3 + \ 2*x**6*y**7/3 - x**5*y**7 + 2*x**5*y**5/15 - x**4*y**5 - \ @@ -443,8 +445,9 @@ def test_RR(): q = ((2 + a)**QQ(1, 5)).series(a, 0, 5).removeO() is_close(p.as_expr(), q.subs(a, 5).n()) + def test_is_regular(): - R, x, y = ring('x, y', QQ) + R, x, y = puiseux_ring('x, y', QQ) p = 1 + 2*x + x**2 + 3*x**3 assert not rs_is_puiseux(p, x) @@ -455,8 +458,9 @@ def test_is_regular(): p = x + x**2*y**QQ(1,5)*y assert not rs_is_puiseux(p, x) + def test_puiseux(): - R, x, y = ring('x, y', QQ) + R, x, y = puiseux_ring('x, y', QQ) p = x**QQ(2,5) + x**QQ(2,3) + x r = rs_series_inversion(p, x, 1) @@ -518,20 +522,21 @@ def test_puiseux(): assert r == -x**QQ(9,5) - x**QQ(26,15) - x**QQ(22,15) - x**QQ(6,5)/3 + \ x + x**QQ(2,3) + x**QQ(2,5) + def test_puiseux_algebraic(): # https://github.com/sympy/sympy/issues/24395 
K = QQ.algebraic_field(sqrt(2)) sqrt2 = K.from_sympy(sqrt(2)) x, y = symbols('x, y') - R, xr, yr = ring([x, y], K) + R, xr, yr = puiseux_ring([x, y], K) p = (1+sqrt2)*xr**QQ(1,2) + (1-sqrt2)*yr**QQ(2,3) - assert dict(p) == {(QQ(1,2),QQ(0)):1+sqrt2, (QQ(0),QQ(2,3)):1-sqrt2} + assert p.to_dict() == {(QQ(1,2),QQ(0)):1+sqrt2, (QQ(0),QQ(2,3)):1-sqrt2} assert p.as_expr() == (1 + sqrt(2))*x**(S(1)/2) + (1 - sqrt(2))*y**(S(2)/3) def test1(): - R, x = ring('x', QQ) + R, x = puiseux_ring('x', QQ) r = rs_sin(x, x, 15)*x**(-5) assert r == x**8/6227020800 - x**6/39916800 + x**4/362880 - x**2/5040 + \ QQ(1,120) - x**-2/6 + x**-4 @@ -556,9 +561,10 @@ def test1(): x**3/720 + x**QQ(5,2)/120 + x**2/24 + x**QQ(3,2)/6 + x/2 + \ x**QQ(1,2) + 1 + def test_puiseux2(): R, y = ring('y', QQ) - S, x = ring('x', R) + S, x = puiseux_ring('x', R.to_domain()) p = x + x**QQ(1,5)*y r = rs_atan(p, x, 3) diff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py index 3a48d45a6f15..4fd3c4c05eff 100644 --- a/sympy/polys/tests/test_rings.py +++ b/sympy/polys/tests/test_rings.py @@ -58,19 +58,10 @@ def test_PolyRing___hash__(): def test_PolyRing___eq__(): assert ring("x,y,z", QQ)[0] == ring("x,y,z", QQ)[0] - assert ring("x,y,z", QQ)[0] is ring("x,y,z", QQ)[0] - assert ring("x,y,z", QQ)[0] != ring("x,y,z", ZZ)[0] - assert ring("x,y,z", QQ)[0] is not ring("x,y,z", ZZ)[0] - assert ring("x,y,z", ZZ)[0] != ring("x,y,z", QQ)[0] - assert ring("x,y,z", ZZ)[0] is not ring("x,y,z", QQ)[0] - assert ring("x,y,z", QQ)[0] != ring("x,y", QQ)[0] - assert ring("x,y,z", QQ)[0] is not ring("x,y", QQ)[0] - assert ring("x,y", QQ)[0] != ring("x,y,z", QQ)[0] - assert ring("x,y", QQ)[0] is not ring("x,y,z", QQ)[0] def test_PolyRing_ring_new(): R, x, y, z = ring("x,y,z", QQ) @@ -288,23 +279,23 @@ def test_PolyElement_from_expr(): R, X, Y, Z = ring((x, y, z), ZZ) f = R.from_expr(1) - assert f == 1 and isinstance(f, R.dtype) + assert f == 1 and R.is_element(f) f = R.from_expr(x) - assert f == X and isinstance(f, R.dtype) + assert f == X and R.is_element(f) f = R.from_expr(x*y*z) - assert f == X*Y*Z and isinstance(f, R.dtype) + assert f == X*Y*Z and R.is_element(f) f = R.from_expr(x*y*z + x*y + x) - assert f == X*Y*Z + X*Y + X and isinstance(f, R.dtype) + assert f == X*Y*Z + X*Y + X and R.is_element(f) f = R.from_expr(x**3*y*z + x**2*y**7 + 1) - assert f == X**3*Y*Z + X**2*Y**7 + 1 and isinstance(f, R.dtype) + assert f == X**3*Y*Z + X**2*Y**7 + 1 and R.is_element(f) r, F = sring([exp(2)]) f = r.from_expr(exp(2)) - assert f == F[0] and isinstance(f, r.dtype) + assert f == F[0] and r.is_element(f) raises(ValueError, lambda: R.from_expr(1/x)) raises(ValueError, lambda: R.from_expr(2**x)) @@ -312,7 +303,7 @@ def test_PolyElement_from_expr(): R, = ring("", ZZ) f = R.from_expr(1) - assert f == 1 and isinstance(f, R.dtype) + assert f == 1 and R.is_element(f) def test_PolyElement_degree(): R, x,y,z = ring("x,y,z", ZZ) @@ -615,9 +606,9 @@ def test_PolyElement___truediv__(): assert (x**2 - 1).quo(x) == x assert (x**2 - x).quo(x) == x - 1 - assert (x**2 - 1)/x == x - x**(-1) + raises(ExactQuotientFailed, lambda: (x**2 - 1)/x) assert (x**2 - x)/x == x - 1 - assert (x**2 - 1)/(2*x) == x/2 - x**(-1)/2 + raises(ExactQuotientFailed, lambda: (x**2 - 1)/(2*x)) assert (x**2 - 1).quo(2*x) == 0 assert (x**2 - x)/(x - 1) == (x**2 - x).quo(x - 1) == x @@ -634,7 +625,7 @@ def test_PolyElement___truediv__(): Rxyz, x,y,z = ring("x,y,z", Ruv) assert dict((u**2*x + u)/u) == {(1, 0, 0): u, (0, 0, 0): 1} - raises(TypeError, lambda: u/(u**2*x + u)) + 
raises(ExactQuotientFailed, lambda: u/(u**2*x + u)) raises(TypeError, lambda: t/x) raises(TypeError, lambda: x/t) @@ -670,7 +661,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = 3*x**3 + x**2 + x + 5, 5*x**2 - 3*x + 1 @@ -678,7 +670,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = 5*x**4 + 4*x**3 + 3*x**2 + 2*x + 1, x**2 + 2*x + 3 @@ -686,7 +679,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = 5*x**5 + 4*x**4 + 3*x**3 + 2*x**2 + x, x**4 + 2*x**3 + 9 @@ -694,7 +688,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) R, x = ring("x", QQ) @@ -704,7 +699,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = 3*x**3 + x**2 + x + 5, 5*x**2 - 3*x + 1 @@ -712,7 +708,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) R, x,y = ring("x,y", ZZ) @@ -722,15 +719,16 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q - assert f.exquo(g) == q + assert f.quo(g) == q + assert f.exquo(g) == f / g == q f, g = x**2 + y**2, x - y q, r = x + y, 2*y**2 assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = x**2 + y**2, -x + y @@ -738,7 +736,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = x**2 + y**2, 2*x - 2*y @@ -746,7 +745,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) R, x,y = ring("x,y", QQ) @@ -756,15 +756,16 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q - assert f.exquo(g) == q + assert f.quo(g) == q + assert f.exquo(g) == f / g == q f, g = x**2 + y**2, x - y q, r = x + y, 2*y**2 assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert 
f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = x**2 + y**2, -x + y @@ -772,7 +773,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) f, g = x**2 + y**2, 2*x - 2*y @@ -780,7 +782,8 @@ def test_PolyElement___truediv__(): assert f.div(g) == divmod(f, g) == (q, r) assert f.rem(g) == f % g == r - assert f.quo(g) == f / g == q + assert f.quo(g) == q + raises(ExactQuotientFailed, lambda: f / g) raises(ExactQuotientFailed, lambda: f.exquo(g)) def test_PolyElement___pow__(): @@ -791,8 +794,6 @@ def test_PolyElement___pow__(): assert f**1 == f raises(ValueError, lambda: f**(-1)) - assert x**(-1) == x**(-1) - assert f**2 == f._pow_generic(2) == f._pow_multinomial(2) == 4*x**2 + 12*x + 9 assert f**3 == f._pow_generic(3) == f._pow_multinomial(3) == 8*x**3 + 36*x**2 + 54*x + 27 assert f**4 == f._pow_generic(4) == f._pow_multinomial(4) == 16*x**4 + 96*x**3 + 216*x**2 + 216*x + 81 @@ -1172,13 +1173,13 @@ def test_PolyElement_evaluate(): f = (x*y)**3 + 4*(x*y)**2 + 2*x*y + 3 r = f.evaluate(x, 0) - assert r == 3 and isinstance(r, R.drop(x).dtype) + assert r == 3 and R.drop(x).is_element(r) r = f.evaluate([(x, 0), (y, 0)]) - assert r == 3 and isinstance(r, R.drop(x, y).dtype) + assert r == 3 and R.drop(x, y).is_element(r) r = f.evaluate(y, 0) - assert r == 3 and isinstance(r, R.drop(y).dtype) + assert r == 3 and R.drop(y).is_element(r) r = f.evaluate([(y, 0), (x, 0)]) - assert r == 3 and isinstance(r, R.drop(y, x).dtype) + assert r == 3 and R.drop(y, x).is_element(r) r = f.evaluate([(x, 0), (y, 0), (z, 0)]) assert r == 3 and not isinstance(r, PolyElement) @@ -1192,7 +1193,7 @@ def test_PolyElement_subs(): f = x**3 + 4*x**2 + 2*x + 3 r = f.subs(x, 0) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) raises(CoercionFailed, lambda: f.subs(x, QQ(1,7))) @@ -1200,9 +1201,9 @@ def test_PolyElement_subs(): f = x**3 + 4*x**2 + 2*x + 3 r = f.subs(x, 0) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) r = f.subs([(x, 0), (y, 0)]) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) raises(CoercionFailed, lambda: f.subs([(x, 1), (y, QQ(1,7))])) raises(CoercionFailed, lambda: f.subs([(x, QQ(1,7)), (y, 1)])) @@ -1252,7 +1253,7 @@ def test_PolyElement_compose(): f = x**3 + 4*x**2 + 2*x + 3 r = f.compose(x, 0) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) assert f.compose(x, x) == f assert f.compose(x, x**2) == x**6 + 4*x**4 + 2*x**2 + 3 @@ -1263,13 +1264,13 @@ def test_PolyElement_compose(): f = x**3 + 4*x**2 + 2*x + 3 r = f.compose(x, 0) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) r = f.compose([(x, 0), (y, 0)]) - assert r == 3 and isinstance(r, R.dtype) + assert r == 3 and R.is_element(r) r = (x**3 + 4*x**2 + 2*x*y*z + 3).compose(x, y*z**2 - 1) q = (y*z**2 - 1)**3 + 4*(y*z**2 - 1)**2 + 2*(y*z**2 - 1)*y*z + 3 - assert r == q and isinstance(r, R.dtype) + assert r == q and R.is_element(r) def test_PolyElement_is_(): R, x,y,z = ring("x,y,z", QQ) @@ -1350,7 +1351,7 @@ def test_PolyElement_drop(): assert R(1).drop(0).ring == PolyRing("y,z", ZZ, lex) assert R(1).drop(0).drop(0).ring == PolyRing("z", ZZ, lex) - assert isinstance(R(1).drop(0).drop(0).drop(0), R.dtype) is False 
+ assert R.is_element(R(1).drop(0).drop(0).drop(0)) is False raises(ValueError, lambda: z.drop(0).drop(0).drop(0)) raises(ValueError, lambda: x.drop(0)) diff --git a/sympy/polys/tests/test_solvers.py b/sympy/polys/tests/test_solvers.py index 9b7c2b3c9f74..bf8708314466 100644 --- a/sympy/polys/tests/test_solvers.py +++ b/sympy/polys/tests/test_solvers.py @@ -13,7 +13,7 @@ def test_solve_lin_sys_2x2_one(): 2*x1 - x2] sol = {x1: QQ(5, 3), x2: QQ(10, 3)} _sol = solve_lin_sys(eqs, domain) - assert _sol == sol and all(isinstance(s, domain.dtype) for s in _sol) + assert _sol == sol and all(s.ring == domain for s in _sol) def test_solve_lin_sys_2x4_none(): domain, x1,x2 = ring("x1,x2", QQ) diff --git a/sympy/printing/tests/test_str.py b/sympy/printing/tests/test_str.py index 5140136b2465..675212964b03 100644 --- a/sympy/printing/tests/test_str.py +++ b/sympy/printing/tests/test_str.py @@ -463,8 +463,6 @@ def test_PolyElement(): assert str(x - 1) == "x - 1" assert str(x + 1) == "x + 1" assert str(x**2) == "x**2" - assert str(x**(-2)) == "x**(-2)" - assert str(x**QQ(1, 2)) == "x**(1/2)" assert str((u**2 + 3*u*v + 1)*x**2*y + u + 1) == "(u**2 + 3*u*v + 1)*x**2*y + u + 1" assert str((u**2 + 3*u*v + 1)*x**2*y + (u + 1)*x) == "(u**2 + 3*u*v + 1)*x**2*y + (u + 1)*x"
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-27524@67e24d4
sympy/sympy
Python
27524
Fix: Return None from the finite predicate for a Mul expression when a finite argument’s nonzero status is indeterminate
## Fixes #27447 #### Brief description of what is fixed or changed Previously, in a Mul expression, if any argument was unbounded (infinite), the result would immediately be False, ignoring cases where a finite argument could be zero. However, if the finite argument is zero, the correct result should be None (undefined). This PR accounts for that case and ensures the correct behavior by returning None when the finite argument’s value cannot be determined as nonzero. #### Release Notes <!-- BEGIN RELEASE NOTES --> * assumptions * Make Q.finite(Mul(...)) return `None` when a finite argument’s nonzero status is indeterminate instead of `False`. <!-- END RELEASE NOTES -->
2025-01-29T11:50:46Z
ask(Q.finite(f * i)) wrongly returns False (for finite f and infinite i) ```python >>>f = Symbol('f', finite=True) >>>i = Symbol('i', infinite=True) >>>ask(Q.finite(f * i)) False ``` ~~An infinite number times zero is zero as far as I'm aware. So since zero is a finite number, this behavior is incorrect.~~ An infinite number times zero is undefined, so the above behavior is incorrect.
> An infinite number times zero is zero as far as I'm aware. I would say that it is undefined. A useful thing to check is how sympy itself evaluates these things. The assumption system should be consistent with the evaluation rules. In this case: ``` In [1]: 0*oo Out[1]: nan ``` Should this be considered a bug then? Should undefined values be considered to be neither finite nor infinite? Or should they be considered to possibly be finite or infinite? In other cases, SymPy seems to prefer the latter interpretation that undefined values may be finite or infinite. ```python >>>ask(Q.infinite(f * i)) is None True >>>S.NaN.is_finite is None True ``` There exist tests for the old assumptions to ensure this sort of behavior (see [test_Mul_is_infinite](https://github.com/sympy/sympy/blob/d7938cbc2f703801413075c147f54db350d29bff/sympy/core/tests/test_assumptions.py#L927)). Any assumptions query on something undefined should return None. https://github.com/sympy/sympy/blob/bc83192924727faf4cd6bd21cec8cde39d0604f2/sympy/assumptions/handlers/calculus.py#L153-L167 I think the issue happens because it returns `False` as soon as it encounters any non-finite factor, without considering the possibility that the other factor might be zero. #### Fix: when we encounter a non-finite arg, check whether any finite arg seen so far could be zero, and if so return None immediately.
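A minimal runnable sketch of that check, written as a standalone helper for illustration; the function name, the wrapper, and the simplified handling of factors with unknown finiteness are assumptions of this sketch rather than SymPy's actual handler registration, while the zero/infinite interplay mirrors the merged patch further below.

```python
from sympy import Q, Symbol, ask

def mul_is_finite(expr, assumptions=True):
    # Minimal sketch (not SymPy's real handler): decide Q.finite for a Mul.
    # Key point: an infinite factor times a possibly-zero finite factor could
    # be 0*oo = nan, so the answer must stay undecided (None), not False.
    result = True
    possible_zero = False
    for arg in expr.args:
        bounded = ask(Q.finite(arg), assumptions)
        if bounded:
            if ask(Q.zero(arg), assumptions) is not False:
                if result is False:      # an infinite factor was already seen
                    return None
                possible_zero = True
        elif bounded is None:
            # The real handler is sharper here; in this sketch any factor of
            # unknown finiteness simply leaves the product undecided.
            return None
        else:
            if possible_zero:            # a possibly-zero finite factor was seen
                return None
            result = False
    return result

f = Symbol('f', finite=True)
i = Symbol('i', infinite=True)
print(mul_is_finite(f * i))              # None (undecided), not False
```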
[ { "body": "```python\n>>>f = Symbol('f', finite=True)\n>>>i = Symbol('i', infinite=True)\n>>>ask(Q.finite(f * i))\nFalse\n```\n\n~~An infinite number times zero is zero as far as I'm aware. So since zero is a finite number, this behavior is incorrect.~~\nAn infinite number times zero is undefined, so the above behavior is incorrect. ", "number": 27447, "title": "ask(Q.finite(f * i)) wrongly returns False (for finite f and infinite i)" } ]
f07466ae38d6f7985c4e9eec2c7dfff43fec3cf7
{ "head_commit": "67e24d47fea04eca26846713d1019b0d8fdbb118", "head_commit_message": "remove failing test from test-issue-27447. Make the separate xfail function for failing test issue 27662", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 46892edcb46f..614f0d5712e2 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -771,6 +771,7 @@ Jason Ross <[email protected]>\n Jason Siefken <[email protected]>\n Jason Tokayer <[email protected]>\n Jason Tokayer <[email protected]> <[email protected]>\n+Jatin Bhardwaj <[email protected]> <[email protected]>\n Jatin Yadav <[email protected]>\n Javed Nissar <[email protected]>\n Jay Patankar <[email protected]> Jay-Patankar <[email protected]>\ndiff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py\nindex 263bed6da00c..40820ae4b169 100644\n--- a/sympy/assumptions/handlers/calculus.py\n+++ b/sympy/assumptions/handlers/calculus.py\n@@ -151,10 +151,14 @@ def _(expr, assumptions):\n * /s = not signed\n \"\"\"\n result = True\n+ possible_zero = False\n for arg in expr.args:\n _bounded = ask(Q.finite(arg), assumptions)\n if _bounded:\n- continue\n+ if ask(Q.zero(arg), assumptions) is not False:\n+ if result is False:\n+ return None\n+ possible_zero = None\n elif _bounded is None:\n if result is None:\n return None\n@@ -163,6 +167,8 @@ def _(expr, assumptions):\n if result is not False:\n result = None\n else:\n+ if possible_zero is None:\n+ return None\n result = False\n return result\n \ndiff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py\nindex 14f1d88c944c..06384198f088 100644\n--- a/sympy/assumptions/tests/test_query.py\n+++ b/sympy/assumptions/tests/test_query.py\n@@ -1019,9 +1019,9 @@ def test_bounded():\n a = x*y\n x, y = a.args\n assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)) is True\n- assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is False\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y)) is False\n assert ask(Q.finite(a), Q.finite(x)) is None\n- assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is False\n+ assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) &~Q.zero(y)) is False\n assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)) is False\n assert ask(Q.finite(a), ~Q.finite(x)) is None\n assert ask(Q.finite(a), Q.finite(y)) is None\n@@ -1031,24 +1031,24 @@ def test_bounded():\n x, y, z = a.args\n assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)\n & Q.finite(z)) is True\n- assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)\n- & ~Q.finite(z)) is False\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & Q.finite(y)\n+ & ~Q.zero(y) & ~Q.finite(z)) is False\n assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)) is None\n- assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)\n- & Q.finite(z)) is False\n- assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y)\n+ & Q.finite(z) & ~Q.zero(z)) is False\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y)\n & ~Q.finite(z)) is False\n assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is None\n assert ask(Q.finite(a), Q.finite(x) & Q.finite(z)) is None\n assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(z)) is None\n assert ask(Q.finite(a), Q.finite(x)) is None\n- assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)\n- & Q.finite(z)) is False\n- assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)\n- & ~Q.finite(z)) is False\n+ assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) & ~Q.zero(y)\n+ & Q.finite(z) & ~Q.zero(z)) is False\n+ 
assert ask(Q.finite(a), ~Q.finite(x) & ~Q.zero(x) & Q.finite(y)\n+ & ~Q.zero(y) & ~Q.finite(z)) is False\n assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is None\n assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)\n- & Q.finite(z)) is False\n+ & Q.finite(z) & ~Q.zero(z)) is False\n assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)\n & ~Q.finite(z)) is False\n assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)) is None\n@@ -1112,6 +1112,32 @@ def test_issue_27441():\n assert ask(Q.composite(y), Q.integer(y) & Q.positive(y) & ~Q.prime(y)) is None\n \n \n+def test_issue_27447():\n+ x,y,z = symbols('x y z')\n+ a = x*y\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is None\n+ assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is None\n+\n+ a = x*y*z\n+ assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)\n+ & ~Q.finite(z)) is None\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)\n+ & Q.finite(z) ) is None\n+ assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)\n+ & ~Q.finite(z)) is None\n+ assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)\n+ & Q.finite(z)) is None\n+ assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)\n+ & ~Q.finite(z)) is None\n+ assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)\n+ & Q.finite(z)) is None\n+\n+@XFAIL\n+def test_issue_27662_xfail():\n+ assert ask(Q.finite(x*y), ~Q.finite(x)\n+ & Q.zero(y)) is None\n+\n+\n @XFAIL\n def test_bounded_xfail():\n \"\"\"We need to support relations in ask for this to work\"\"\"\n" }
[ { "diff_hunk": "@@ -151,10 +151,14 @@ def _(expr, assumptions):\n * /s = not signed\n \"\"\"\n result = True\n+ possible_zero = False\n for arg in expr.args:\n _bounded = ask(Q.finite(arg), assumptions)\n if _bounded:\n- continue\n+ if ask(Q.zero(arg), assumptions) is not False:\n+ if result is False:\n+ return None\n+ possible_zero = None", "line": null, "original_line": 161, "original_start_line": null, "path": "sympy/assumptions/handlers/calculus.py", "start_line": null, "text": "@user1:\nI think it would make more sense to just set possible_zero to True" } ]
278c80d818e45816400c384d9e7f35c79045f1f4
diff --git a/.mailmap b/.mailmap index 46892edcb46f..614f0d5712e2 100644 --- a/.mailmap +++ b/.mailmap @@ -771,6 +771,7 @@ Jason Ross <[email protected]> Jason Siefken <[email protected]> Jason Tokayer <[email protected]> Jason Tokayer <[email protected]> <[email protected]> +Jatin Bhardwaj <[email protected]> <[email protected]> Jatin Yadav <[email protected]> Javed Nissar <[email protected]> Jay Patankar <[email protected]> Jay-Patankar <[email protected]> diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py index 263bed6da00c..9e60b565d027 100644 --- a/sympy/assumptions/handlers/calculus.py +++ b/sympy/assumptions/handlers/calculus.py @@ -151,10 +151,14 @@ def _(expr, assumptions): * /s = not signed """ result = True + possible_zero = False for arg in expr.args: _bounded = ask(Q.finite(arg), assumptions) if _bounded: - continue + if ask(Q.zero(arg), assumptions) is not False: + if result is False: + return None + possible_zero = True elif _bounded is None: if result is None: return None @@ -163,6 +167,8 @@ def _(expr, assumptions): if result is not False: result = None else: + if possible_zero: + return None result = False return result diff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py index 14f1d88c944c..a13d7e12dedf 100644 --- a/sympy/assumptions/tests/test_query.py +++ b/sympy/assumptions/tests/test_query.py @@ -1019,9 +1019,9 @@ def test_bounded(): a = x*y x, y = a.args assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)) is True - assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is False + assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y)) is False assert ask(Q.finite(a), Q.finite(x)) is None - assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is False + assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) &~Q.zero(y)) is False assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)) is False assert ask(Q.finite(a), ~Q.finite(x)) is None assert ask(Q.finite(a), Q.finite(y)) is None @@ -1031,24 +1031,24 @@ def test_bounded(): x, y, z = a.args assert ask(Q.finite(a), Q.finite(x) & Q.finite(y) & Q.finite(z)) is True - assert ask(Q.finite(a), Q.finite(x) & Q.finite(y) - & ~Q.finite(z)) is False + assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & Q.finite(y) + & ~Q.zero(y) & ~Q.finite(z)) is False assert ask(Q.finite(a), Q.finite(x) & Q.finite(y)) is None - assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y) - & Q.finite(z)) is False - assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y) + assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y) + & Q.finite(z) & ~Q.zero(z)) is False + assert ask(Q.finite(a), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y) & ~Q.finite(z)) is False assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is None assert ask(Q.finite(a), Q.finite(x) & Q.finite(z)) is None assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(z)) is None assert ask(Q.finite(a), Q.finite(x)) is None - assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) - & Q.finite(z)) is False - assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) - & ~Q.finite(z)) is False + assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) & ~Q.zero(y) + & Q.finite(z) & ~Q.zero(z)) is False + assert ask(Q.finite(a), ~Q.finite(x) & ~Q.zero(x) & Q.finite(y) + & ~Q.zero(y) & ~Q.finite(z)) is False assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is None assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y) - & Q.finite(z)) is False + & Q.finite(z) & ~Q.zero(z)) is False assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y) & 
~Q.finite(z)) is False assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y)) is None @@ -1112,6 +1112,33 @@ def test_issue_27441(): assert ask(Q.composite(y), Q.integer(y) & Q.positive(y) & ~Q.prime(y)) is None +def test_issue_27447(): + x,y,z = symbols('x y z') + a = x*y + assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y)) is None + assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y)) is None + + a = x*y*z + assert ask(Q.finite(a), Q.finite(x) & Q.finite(y) + & ~Q.finite(z)) is None + assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y) + & Q.finite(z) ) is None + assert ask(Q.finite(a), Q.finite(x) & ~Q.finite(y) + & ~Q.finite(z)) is None + assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) + & Q.finite(z)) is None + assert ask(Q.finite(a), ~Q.finite(x) & Q.finite(y) + & ~Q.finite(z)) is None + assert ask(Q.finite(a), ~Q.finite(x) & ~Q.finite(y) + & Q.finite(z)) is None + + +@XFAIL +def test_issue_27662_xfail(): + assert ask(Q.finite(x*y), ~Q.finite(x) + & Q.zero(y)) is None + + @XFAIL def test_bounded_xfail(): """We need to support relations in ask for this to work"""
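The behaviour change can be checked directly; the snippet below is a minimal sketch mirroring the new `test_issue_27447` assertions and assumes a SymPy build that already contains this patch.

```python
# Minimal sketch mirroring test_issue_27447 above; assumes a SymPy build
# that includes this patch.
from sympy import Q, ask, symbols

x, y = symbols("x y")

# A finite factor that might be zero times an infinite factor is now
# indeterminate (0*oo is undefined), so ask() returns None instead of False.
print(ask(Q.finite(x * y), Q.finite(x) & ~Q.finite(y)))               # None
# Ruling out zero restores the definite "not finite" answer.
print(ask(Q.finite(x * y), Q.finite(x) & ~Q.zero(x) & ~Q.finite(y)))  # False
```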
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xorbitsai__inference-616@333bd7e
xorbitsai/inference
Python
616
ENH: cache status
Resolves #592. Introduce `cache_status` into the output of `list_model_registrations`. For LLMs, `cache_status` is incorporated into each model's specification: for PyTorch models it is a boolean value, and for GGML models it is a list of booleans reflecting the status of each quantization. For embedding and image models, `cache_status` is attributed to the entire model family and is always a boolean. The frontend has been updated to display whether a model is cached, using the `cache_status` provided by the backend: ![image](https://github.com/xorbitsai/inference/assets/109661872/56ec799f-e5c9-4464-8472-8be272c016aa)
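A minimal client-side sketch (not part of the PR) of how the new field could be consumed; the endpoint URL and port are assumptions, while the `detailed=true` query parameter and the boolean-vs-list shape of `cache_status` follow the diff reviewed below.

```python
# Hypothetical client sketch: list LLM registrations with detailed=true and
# report whether any spec is already cached. The base URL is an assumption;
# cache_status is a bool for pytorch specs and a list of bools for ggml/gguf.
import requests

base_url = "http://127.0.0.1:9997"  # assumed local Xinference endpoint

def is_cached(spec):
    cs = spec.get("cache_status")
    return any(cs) if isinstance(cs, list) else bool(cs)

resp = requests.get(
    f"{base_url}/v1/model_registrations/LLM", params={"detailed": "true"}
)
for family in resp.json():
    cached = any(is_cached(spec) for spec in family.get("model_specs", []))
    print(f'{family["model_name"]}: {"cached" if cached else "not cached"}')
```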
2023-11-06T09:59:12Z
ENH: let users know if the model is cached or not ### Is your feature request related to a problem? Please describe The describe model API should indicate to the caller whether the model has been cached. Additionally, updates are required for the web UI: - A "cached" tag should be added to the model card if the model has been cached. - For GGML/GGUF models, alongside the tag, a hint should be provided when a quantization model is selected.
[ { "body": "### Is your feature request related to a problem? Please describe\r\nThe describe model API should indicate to the caller whether the model has been cached.\r\n\r\nAdditionally, updates are required for the web UI:\r\n- A \"cached\" tag should be added to the model card if the model has been cached.\r\n- For GGML/GGUF models, alongside the tag, a hint should be provided when a quantization model is selected.\r\n\r\n\r\n", "number": 592, "title": "ENH: let users know if the model is cached or not" } ]
e8e189568397b6c76f9942e25c5fb6d74258102e
{ "head_commit": "333bd7ec43ada87405fa499cd8428e3d281cdc1a", "head_commit_message": "UT", "patch_to_review": "diff --git a/xinference/api/restful_api.py b/xinference/api/restful_api.py\nindex 9075d48479..930b1b37ee 100644\n--- a/xinference/api/restful_api.py\n+++ b/xinference/api/restful_api.py\n@@ -23,7 +23,16 @@\n \n import gradio as gr\n import xoscar as xo\n-from fastapi import APIRouter, FastAPI, File, Form, HTTPException, Request, UploadFile\n+from fastapi import (\n+ APIRouter,\n+ FastAPI,\n+ File,\n+ Form,\n+ HTTPException,\n+ Query,\n+ Request,\n+ UploadFile,\n+)\n from fastapi.middleware.cors import CORSMiddleware\n from fastapi.responses import JSONResponse\n from fastapi.staticfiles import StaticFiles\n@@ -860,10 +869,12 @@ async def unregister_model(self, model_type: str, model_name: str):\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n \n- async def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]:\n+ async def list_model_registrations(\n+ self, model_type: str, detailed: bool = Query(False)\n+ ) -> List[Dict[str, Any]]:\n try:\n return await (await self._get_supervisor_ref()).list_model_registrations(\n- model_type\n+ model_type, detailed=detailed\n )\n except ValueError as re:\n logger.error(re, exc_info=True)\ndiff --git a/xinference/core/supervisor.py b/xinference/core/supervisor.py\nindex 63ba0c44ef..ed08c474a9 100644\n--- a/xinference/core/supervisor.py\n+++ b/xinference/core/supervisor.py\n@@ -33,8 +33,12 @@\n )\n \n if TYPE_CHECKING:\n+ from ..model.embedding import EmbeddingModelSpec\n+ from ..model.image import ImageModelFamilyV1\n+ from ..model.llm import LLMFamilyV1\n from .worker import WorkerActor\n \n+\n logger = getLogger(__name__)\n \n \n@@ -102,8 +106,47 @@ def get_status(self) -> Dict:\n \"workers\": self._worker_status,\n }\n \n+ @staticmethod\n+ def _to_llm_reg(llm_family: \"LLMFamilyV1\", is_builtin: bool) -> Dict[str, Any]:\n+ from ..model.llm import get_cache_status\n+\n+ specs = []\n+ for spec in llm_family.model_specs:\n+ cache_status = get_cache_status(llm_family, spec)\n+ specs.append({**spec.dict(), \"cache_status\": cache_status})\n+\n+ return {**llm_family.dict(), \"is_builtin\": is_builtin, \"model_specs\": specs}\n+\n+ @staticmethod\n+ def _to_embedding_model_reg(\n+ model_spec: \"EmbeddingModelSpec\", is_builtin: bool\n+ ) -> Dict[str, Any]:\n+ from ..model.embedding import get_cache_status\n+\n+ cache_status = get_cache_status(model_spec)\n+ return {\n+ **model_spec.dict(),\n+ \"cache_status\": cache_status,\n+ \"is_builtin\": is_builtin,\n+ }\n+\n+ @staticmethod\n+ def _to_image_model_reg(\n+ model_family: \"ImageModelFamilyV1\", is_builtin: bool\n+ ) -> Dict[str, Any]:\n+ from ..model.image import get_cache_status\n+\n+ cache_status = get_cache_status(model_family)\n+ return {\n+ **model_family.dict(),\n+ \"cache_status\": cache_status,\n+ \"is_builtin\": is_builtin,\n+ }\n+\n @log_sync(logger=logger)\n- def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]:\n+ def list_model_registrations(\n+ self, model_type: str, detailed: bool = False\n+ ) -> List[Dict[str, Any]]:\n def sort_helper(item):\n assert isinstance(item[\"model_name\"], str)\n return item.get(\"model_name\").lower()\n@@ -111,37 +154,43 @@ def sort_helper(item):\n if model_type == \"LLM\":\n from ..model.llm import BUILTIN_LLM_FAMILIES, get_user_defined_llm_families\n \n- ret = [\n- {\"model_name\": f.model_name, \"is_builtin\": True}\n- for f in BUILTIN_LLM_FAMILIES\n- ]\n- 
user_defined_llm_families = get_user_defined_llm_families()\n- ret.extend(\n- [\n- {\"model_name\": f.model_name, \"is_builtin\": False}\n- for f in user_defined_llm_families\n- ]\n- )\n+ ret = []\n+ for family in BUILTIN_LLM_FAMILIES:\n+ if detailed:\n+ ret.append(self._to_llm_reg(family, True))\n+ else:\n+ ret.append({\"model_name\": family.model_name, \"is_builtin\": True})\n \n- ret.sort(key=sort_helper)\n+ for family in get_user_defined_llm_families():\n+ if detailed:\n+ ret.append(self._to_llm_reg(family, False))\n+ else:\n+ ret.append({\"model_name\": family.model_name, \"is_builtin\": False})\n \n+ ret.sort(key=sort_helper)\n return ret\n elif model_type == \"embedding\":\n from ..model.embedding import BUILTIN_EMBEDDING_MODELS\n \n- ret = [\n- {\"model_name\": model_name, \"is_builtin\": True}\n- for model_name in BUILTIN_EMBEDDING_MODELS\n- ]\n+ ret = []\n+ for model_name, family in BUILTIN_EMBEDDING_MODELS.items():\n+ if detailed:\n+ ret.append(self._to_embedding_model_reg(family, is_builtin=True))\n+ else:\n+ ret.append({\"model_name\": model_name, \"is_builtin\": True})\n+\n ret.sort(key=sort_helper)\n return ret\n elif model_type == \"image\":\n from ..model.image import BUILTIN_IMAGE_MODELS\n \n- ret = [\n- {\"model_name\": model_name, \"is_builtin\": True}\n- for model_name in BUILTIN_IMAGE_MODELS\n- ]\n+ ret = []\n+ for model_name, family in BUILTIN_IMAGE_MODELS.items():\n+ if detailed:\n+ ret.append(self._to_image_model_reg(family, is_builtin=True))\n+ else:\n+ ret.append({\"model_name\": model_name, \"is_builtin\": True})\n+\n ret.sort(key=sort_helper)\n return ret\n else:\ndiff --git a/xinference/model/embedding/__init__.py b/xinference/model/embedding/__init__.py\nindex 3c018c9da4..59a2592b5b 100644\n--- a/xinference/model/embedding/__init__.py\n+++ b/xinference/model/embedding/__init__.py\n@@ -16,7 +16,7 @@\n import json\n import os\n \n-from .core import EmbeddingModelSpec\n+from .core import EmbeddingModelSpec, get_cache_status\n \n _model_spec_json = os.path.join(os.path.dirname(__file__), \"model_spec.json\")\n _model_spec_modelscope_json = os.path.join(\ndiff --git a/xinference/model/embedding/core.py b/xinference/model/embedding/core.py\nindex 94c193eb26..9a6364dafb 100644\n--- a/xinference/model/embedding/core.py\n+++ b/xinference/model/embedding/core.py\n@@ -98,6 +98,18 @@ def cache(model_spec: EmbeddingModelSpec):\n return cache_dir\n \n \n+def get_cache_status(\n+ model_spec: EmbeddingModelSpec,\n+) -> bool:\n+ cache_dir = os.path.realpath(\n+ os.path.join(XINFERENCE_CACHE_DIR, model_spec.model_name)\n+ )\n+ if not os.path.exists(cache_dir):\n+ os.makedirs(cache_dir, exist_ok=True)\n+ meta_path = os.path.join(cache_dir, \"__valid_download\")\n+ return valid_model_revision(meta_path, model_spec.model_revision)\n+\n+\n class EmbeddingModel:\n def __init__(self, model_uid: str, model_path: str, device: Optional[str] = None):\n self._model_uid = model_uid\ndiff --git a/xinference/model/embedding/tests/test_embedding_models.py b/xinference/model/embedding/tests/test_embedding_models.py\nindex 9bbe132fa7..7dc6a9171e 100644\n--- a/xinference/model/embedding/tests/test_embedding_models.py\n+++ b/xinference/model/embedding/tests/test_embedding_models.py\n@@ -11,7 +11,9 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+\n import os\n+import shutil\n \n from ...utils import valid_model_revision\n from ..core import EmbeddingModel, 
EmbeddingModelSpec, cache\n@@ -45,28 +47,33 @@\n \n \n def test_model():\n- model_path = cache(TEST_MODEL_SPEC)\n- model = EmbeddingModel(\"mock\", model_path)\n- # input is a string\n- input_text = \"what is the capital of China?\"\n- model.load()\n- r = model.create_embedding(input_text)\n- assert len(r[\"data\"]) == 1\n- for d in r[\"data\"]:\n- assert len(d[\"embedding\"]) == 384\n-\n- # input is a lit\n- input_texts = [\n- \"what is the capital of China?\",\n- \"how to implement quick sort in python?\",\n- \"Beijing\",\n- \"sorting algorithms\",\n- ]\n- model.load()\n- r = model.create_embedding(input_texts)\n- assert len(r[\"data\"]) == 4\n- for d in r[\"data\"]:\n- assert len(d[\"embedding\"]) == 384\n+ model_path = None\n+ try:\n+ model_path = cache(TEST_MODEL_SPEC)\n+ model = EmbeddingModel(\"mock\", model_path)\n+ # input is a string\n+ input_text = \"what is the capital of China?\"\n+ model.load()\n+ r = model.create_embedding(input_text)\n+ assert len(r[\"data\"]) == 1\n+ for d in r[\"data\"]:\n+ assert len(d[\"embedding\"]) == 384\n+\n+ # input is a lit\n+ input_texts = [\n+ \"what is the capital of China?\",\n+ \"how to implement quick sort in python?\",\n+ \"Beijing\",\n+ \"sorting algorithms\",\n+ ]\n+ model.load()\n+ r = model.create_embedding(input_texts)\n+ assert len(r[\"data\"]) == 4\n+ for d in r[\"data\"]:\n+ assert len(d[\"embedding\"]) == 384\n+ finally:\n+ if model_path is not None:\n+ shutil.rmtree(model_path, ignore_errors=True)\n \n \n def test_model_from_modelscope():\n@@ -82,21 +89,38 @@ def test_model_from_modelscope():\n \n \n def test_meta_file():\n- cache_dir = cache(TEST_MODEL_SPEC)\n- meta_path = os.path.join(cache_dir, \"__valid_download\")\n- assert valid_model_revision(meta_path, TEST_MODEL_SPEC.model_revision)\n-\n- # test another version of the same model\n- assert not valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision)\n- cache_dir = cache(TEST_MODEL_SPEC2)\n- meta_path = os.path.join(cache_dir, \"__valid_download\")\n- assert valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision)\n-\n- # test functionality of the new version model\n- model = EmbeddingModel(\"mock\", cache_dir)\n- input_text = \"I can do this all day.\"\n- model.load()\n- r = model.create_embedding(input_text)\n- assert len(r[\"data\"]) == 1\n- for d in r[\"data\"]:\n- assert len(d[\"embedding\"]) == 384\n+ cache_dir = None\n+ try:\n+ cache_dir = cache(TEST_MODEL_SPEC)\n+ meta_path = os.path.join(cache_dir, \"__valid_download\")\n+ assert valid_model_revision(meta_path, TEST_MODEL_SPEC.model_revision)\n+\n+ # test another version of the same model\n+ assert not valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision)\n+ cache_dir = cache(TEST_MODEL_SPEC2)\n+ meta_path = os.path.join(cache_dir, \"__valid_download\")\n+ assert valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision)\n+\n+ # test functionality of the new version model\n+ model = EmbeddingModel(\"mock\", cache_dir)\n+ input_text = \"I can do this all day.\"\n+ model.load()\n+ r = model.create_embedding(input_text)\n+ assert len(r[\"data\"]) == 1\n+ for d in r[\"data\"]:\n+ assert len(d[\"embedding\"]) == 384\n+ finally:\n+ shutil.rmtree(cache_dir, ignore_errors=True)\n+\n+\n+def test_get_cache_status():\n+ from ..core import get_cache_status\n+\n+ model_path = None\n+ try:\n+ assert get_cache_status(TEST_MODEL_SPEC) is False\n+ model_path = cache(TEST_MODEL_SPEC)\n+ assert get_cache_status(TEST_MODEL_SPEC) is True\n+ finally:\n+ if model_path is not None:\n+ 
shutil.rmtree(model_path, ignore_errors=True)\ndiff --git a/xinference/model/image/__init__.py b/xinference/model/image/__init__.py\nindex 2f912d3928..432bbdd3a9 100644\n--- a/xinference/model/image/__init__.py\n+++ b/xinference/model/image/__init__.py\n@@ -16,7 +16,7 @@\n import json\n import os\n \n-from .core import ImageModelFamilyV1\n+from .core import ImageModelFamilyV1, get_cache_status\n \n _model_spec_json = os.path.join(os.path.dirname(__file__), \"model_spec.json\")\n BUILTIN_IMAGE_MODELS = dict(\ndiff --git a/xinference/model/image/core.py b/xinference/model/image/core.py\nindex 0170f7c8cb..65c699d024 100644\n--- a/xinference/model/image/core.py\n+++ b/xinference/model/image/core.py\n@@ -20,6 +20,7 @@\n \n from ...constants import XINFERENCE_CACHE_DIR\n from ..core import ModelDescription\n+from ..utils import valid_model_revision\n from .stable_diffusion.core import DiffusionModel\n \n MAX_ATTEMPTS = 3\n@@ -70,6 +71,11 @@ def cache(model_spec: ImageModelFamilyV1):\n )\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir, exist_ok=True)\n+\n+ meta_path = os.path.join(cache_dir, \"__valid_download\")\n+ if valid_model_revision(meta_path, model_spec.model_revision):\n+ return cache_dir\n+\n for current_attempt in range(1, MAX_ATTEMPTS + 1):\n try:\n huggingface_hub.snapshot_download(\n@@ -89,9 +95,28 @@ def cache(model_spec: ImageModelFamilyV1):\n raise RuntimeError(\n f\"Failed to download model '{model_spec.model_name}' after {MAX_ATTEMPTS} attempts\"\n )\n+\n+ with open(meta_path, \"w\") as f:\n+ import json\n+\n+ desc = ImageModelDescription(model_spec)\n+ json.dump(desc.to_dict(), f)\n+\n return cache_dir\n \n \n+def get_cache_status(\n+ model_spec: ImageModelFamilyV1,\n+) -> bool:\n+ cache_dir = os.path.realpath(\n+ os.path.join(XINFERENCE_CACHE_DIR, model_spec.model_name)\n+ )\n+ if not os.path.exists(cache_dir):\n+ os.makedirs(cache_dir, exist_ok=True)\n+ meta_path = os.path.join(cache_dir, \"__valid_download\")\n+ return valid_model_revision(meta_path, model_spec.model_revision)\n+\n+\n def create_image_model_instance(\n model_uid: str, model_name: str, **kwargs\n ) -> Tuple[DiffusionModel, ImageModelDescription]:\ndiff --git a/xinference/model/image/tests/test_stable_diffusion.py b/xinference/model/image/tests/test_stable_diffusion.py\nindex 809cd82d3d..397474bfe8 100644\n--- a/xinference/model/image/tests/test_stable_diffusion.py\n+++ b/xinference/model/image/tests/test_stable_diffusion.py\n@@ -15,6 +15,7 @@\n import io\n import logging\n import os.path\n+import shutil\n from io import BytesIO\n \n import pytest\n@@ -34,20 +35,25 @@\n \n \n def test_model():\n- model_path = cache(TEST_MODEL_SPEC)\n- model = DiffusionModel(\"mock\", model_path)\n- # input is a string\n- input_text = \"an apple\"\n- model.load()\n- r = model.text_to_image(input_text, size=\"256*256\")\n- assert len(r[\"data\"]) == 1\n- assert os.path.exists(r[\"data\"][0][\"url\"])\n- r = model.text_to_image(input_text, size=\"256*256\", response_format=\"b64_json\")\n- assert len(r[\"data\"]) == 1\n- b64_json = r[\"data\"][0][\"b64_json\"]\n- image_bytes = base64.decodebytes(b64_json)\n- img = Image.open(BytesIO(image_bytes))\n- assert img.size == (256, 256)\n+ model_path = None\n+ try:\n+ model_path = cache(TEST_MODEL_SPEC)\n+ model = DiffusionModel(\"mock\", model_path)\n+ # input is a string\n+ input_text = \"an apple\"\n+ model.load()\n+ r = model.text_to_image(input_text, size=\"256*256\")\n+ assert len(r[\"data\"]) == 1\n+ assert os.path.exists(r[\"data\"][0][\"url\"])\n+ r = 
model.text_to_image(input_text, size=\"256*256\", response_format=\"b64_json\")\n+ assert len(r[\"data\"]) == 1\n+ b64_json = r[\"data\"][0][\"b64_json\"]\n+ image_bytes = base64.decodebytes(b64_json)\n+ img = Image.open(BytesIO(image_bytes))\n+ assert img.size == (256, 256)\n+ finally:\n+ if model_path is not None:\n+ shutil.rmtree(model_path)\n \n \n @pytest.mark.skip(reason=\"Stable diffusion controlnet requires too many GRAM.\")\n@@ -136,3 +142,16 @@ def test_restful_api_for_image_with_mlsd_controlnet(setup):\n num_inference_steps=20,\n )\n logger.info(\"test result %s\", r)\n+\n+\n+def test_get_cache_status():\n+ from ..core import get_cache_status\n+\n+ model_path = None\n+ try:\n+ assert get_cache_status(TEST_MODEL_SPEC) is False\n+ model_path = cache(TEST_MODEL_SPEC)\n+ assert get_cache_status(TEST_MODEL_SPEC) is True\n+ finally:\n+ if model_path is not None:\n+ shutil.rmtree(model_path)\ndiff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py\nindex abb44caaa8..6271c9bec0 100644\n--- a/xinference/model/llm/__init__.py\n+++ b/xinference/model/llm/__init__.py\n@@ -26,6 +26,7 @@\n LLMSpecV1,\n PromptStyleV1,\n PytorchLLMSpecV1,\n+ get_cache_status,\n get_user_defined_llm_families,\n match_llm,\n match_llm_cls,\ndiff --git a/xinference/model/llm/llm_family.py b/xinference/model/llm/llm_family.py\nindex d555755918..d4ec01f973 100644\n--- a/xinference/model/llm/llm_family.py\n+++ b/xinference/model/llm/llm_family.py\n@@ -581,6 +581,36 @@ def cache_from_huggingface(\n return cache_dir\n \n \n+def get_cache_status(\n+ llm_family: LLMFamilyV1,\n+ llm_spec: \"LLMSpecV1\",\n+) -> Union[bool, List[bool]]:\n+ cache_dir = _get_cache_dir(llm_family, llm_spec)\n+ if llm_spec.model_format == \"pytorch\":\n+ return _skip_download(\n+ cache_dir,\n+ llm_spec.model_format,\n+ llm_spec.model_hub,\n+ llm_spec.model_revision,\n+ \"none\",\n+ )\n+ elif llm_spec.model_format in [\"ggmlv3\", \"ggufv2\"]:\n+ ret = []\n+ for q in llm_spec.quantizations:\n+ ret.append(\n+ _skip_download(\n+ cache_dir,\n+ llm_spec.model_format,\n+ llm_spec.model_hub,\n+ llm_spec.model_revision,\n+ q,\n+ )\n+ )\n+ return ret\n+ else:\n+ raise ValueError(f\"Unsupported model format: {llm_spec.model_format}\")\n+\n+\n def _is_linux():\n return platform.system() == \"Linux\"\n \ndiff --git a/xinference/model/llm/tests/test_llm_family.py b/xinference/model/llm/tests/test_llm_family.py\nindex 107afe75ae..46f190b703 100644\n--- a/xinference/model/llm/tests/test_llm_family.py\n+++ b/xinference/model/llm/tests/test_llm_family.py\n@@ -866,3 +866,75 @@ def test_skip_download_ggml():\n finally:\n os.remove(ms_meta_path)\n assert not os.path.exists(ms_meta_path)\n+\n+\n+def test_get_cache_status_pytorch():\n+ from ..llm_family import cache_from_huggingface, get_cache_status\n+\n+ spec = PytorchLLMSpecV1(\n+ model_format=\"pytorch\",\n+ model_size_in_billions=1,\n+ quantizations=[\"4-bit\", \"8-bit\", \"none\"],\n+ model_id=\"facebook/opt-125m\",\n+ )\n+ family = LLMFamilyV1(\n+ version=1,\n+ context_length=2048,\n+ model_type=\"LLM\",\n+ model_name=\"opt\",\n+ model_lang=[\"en\"],\n+ model_ability=[\"embed\", \"generate\"],\n+ model_specs=[spec],\n+ prompt_style=None,\n+ )\n+\n+ cache_status = get_cache_status(llm_family=family, llm_spec=spec)\n+ assert not isinstance(cache_status, list)\n+ assert not cache_status\n+\n+ cache_dir = cache_from_huggingface(family, spec, quantization=None)\n+ cache_status = get_cache_status(llm_family=family, llm_spec=spec)\n+ assert not isinstance(cache_status, list)\n+ assert 
cache_status\n+\n+ assert os.path.exists(cache_dir)\n+ assert os.path.exists(os.path.join(cache_dir, \"README.md\"))\n+ assert os.path.islink(os.path.join(cache_dir, \"README.md\"))\n+ shutil.rmtree(cache_dir)\n+\n+\n+def test_get_cache_status_ggml():\n+ from ..llm_family import cache_from_huggingface, get_cache_status\n+\n+ spec = GgmlLLMSpecV1(\n+ model_format=\"ggmlv3\",\n+ model_size_in_billions=3,\n+ model_id=\"TheBloke/orca_mini_3B-GGML\",\n+ quantizations=[\"q4_0\", \"q5_0\"],\n+ model_file_name_template=\"README.md\",\n+ )\n+ family = LLMFamilyV1(\n+ version=1,\n+ context_length=2048,\n+ model_type=\"LLM\",\n+ model_name=\"orca\",\n+ model_lang=[\"en\"],\n+ model_ability=[\"embed\", \"chat\"],\n+ model_specs=[spec],\n+ prompt_style=None,\n+ )\n+\n+ cache_status = get_cache_status(llm_family=family, llm_spec=spec)\n+ assert isinstance(cache_status, list)\n+ assert not any(cache_status)\n+\n+ cache_dir = cache_from_huggingface(family, spec, quantization=\"q4_0\")\n+ cache_status = get_cache_status(llm_family=family, llm_spec=spec)\n+ assert isinstance(cache_status, list)\n+ assert len(cache_status) == 2\n+ assert cache_status[0] and not cache_status[1]\n+\n+ assert os.path.exists(cache_dir)\n+ assert os.path.exists(os.path.join(cache_dir, \"README.md\"))\n+ assert os.path.islink(os.path.join(cache_dir, \"README.md\"))\n+ shutil.rmtree(cache_dir)\ndiff --git a/xinference/model/utils.py b/xinference/model/utils.py\nindex 5ab0425fd6..7f84136c89 100644\n--- a/xinference/model/utils.py\n+++ b/xinference/model/utils.py\n@@ -118,7 +118,7 @@ def valid_model_revision(\n logger.debug(\"Legacy meta file detected.\")\n return True\n \n- if \"model_revision\" in meta_data: # embedding\n+ if \"model_revision\" in meta_data: # embedding, image\n real_revision = meta_data[\"model_revision\"]\n elif \"revision\" in meta_data: # llm\n real_revision = meta_data[\"revision\"]\ndiff --git a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\nindex c42e00d6d2..20dfe33caa 100644\n--- a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\n+++ b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js\n@@ -3,10 +3,7 @@ import { v1 as uuidv1 } from \"uuid\";\n import { ApiContext } from \"../../components/apiContext\";\n import { Box, Chip } from \"@mui/material\";\n import { CircularProgress } from \"@mui/material\";\n-import {\n- UndoOutlined,\n- RocketLaunchOutlined,\n-} from \"@mui/icons-material\";\n+import { UndoOutlined, RocketLaunchOutlined } from \"@mui/icons-material\";\n \n const CARD_HEIGHT = 270;\n const CARD_WIDTH = 270;\n@@ -33,7 +30,7 @@ const EmbeddingCard = ({ url, modelData }) => {\n const modelDataWithID = {\n model_uid: uuid,\n model_name: modelData.model_name,\n- model_type: \"embedding\"\n+ model_type: \"embedding\",\n };\n \n // First fetch request to initiate the model\n@@ -175,8 +172,8 @@ const EmbeddingCard = ({ url, modelData }) => {\n fontSize: \"0.8em\",\n },\n langRow: {\n- margin: \"2px 5px 40px 5px\"\n- }\n+ margin: \"2px 5px 40px 5px\",\n+ },\n };\n \n // Set two different states based on mouse hover\n@@ -199,30 +196,29 @@ const EmbeddingCard = ({ url, modelData }) => {\n {(() => {\n if (modelData.language.includes(\"en\")) {\n return (\n- <Chip label=\"EN\" variant=\"outlined\" size=\"small\" sx={{ marginRight: \"10px\" }} />\n+ <Chip\n+ label=\"EN\"\n+ variant=\"outlined\"\n+ size=\"small\"\n+ sx={{ marginRight: \"10px\" }}\n+ />\n );\n }\n })()}\n {(() => {\n if 
(modelData.language.includes(\"zh\")) {\n- return (\n- <Chip label=\"ZH\" variant=\"outlined\" size=\"small\" />\n- );\n+ return <Chip label=\"ZH\" variant=\"outlined\" size=\"small\" />;\n }\n })()}\n </div>\n </div>\n <div style={styles.iconRow}>\n <div style={styles.iconItem}>\n- <span style={styles.boldIconText}>\n- {modelData.dimensions}\n- </span>\n+ <span style={styles.boldIconText}>{modelData.dimensions}</span>\n <small style={styles.smallText}>dimensions</small>\n </div>\n <div style={styles.iconItem}>\n- <span style={styles.boldIconText}>\n- {modelData.max_tokens}\n- </span>\n+ <span style={styles.boldIconText}>{modelData.max_tokens}</span>\n <small style={styles.smallText}>max tokens</small>\n </div>\n </div>\n@@ -248,13 +244,7 @@ const EmbeddingCard = ({ url, modelData }) => {\n title=\"Launch Embedding\"\n style={styles.buttonContainer}\n onClick={() => launchModel(url, modelData)}\n- disabled={\n- isCallingApi ||\n- isUpdatingModel ||\n- !(\n- modelData\n- )\n- }\n+ disabled={isCallingApi || isUpdatingModel || !modelData}\n >\n {(() => {\n if (isCallingApi || isUpdatingModel) {\n@@ -270,11 +260,7 @@ const EmbeddingCard = ({ url, modelData }) => {\n />\n </Box>\n );\n- } else if (\n- !(\n- modelData\n- )\n- ) {\n+ } else if (!modelData) {\n return (\n <Box\n style={{ ...styles.buttonItem, backgroundColor: \"#f2f2f2\" }}\ndiff --git a/xinference/web/ui/src/scenes/launch_model/index.js b/xinference/web/ui/src/scenes/launch_model/index.js\nindex 111576d481..69d5979f01 100644\n--- a/xinference/web/ui/src/scenes/launch_model/index.js\n+++ b/xinference/web/ui/src/scenes/launch_model/index.js\n@@ -1,13 +1,9 @@\n-import React, { } from \"react\";\n+import React from \"react\";\n import Title from \"../../components/Title\";\n import LaunchLLM from \"./launchLLM\";\n import LaunchEmbedding from \"./launchEmbedding\";\n-import {\n- Box,\n- Tab\n-} from \"@mui/material\";\n-import { TabContext, TabList, TabPanel } from \"@mui/lab\"\n-\n+import { Box, Tab } from \"@mui/material\";\n+import { TabContext, TabList, TabPanel } from \"@mui/lab\";\n \n const LaunchModel = () => {\n const [value, setValue] = React.useState(\"1\");\n@@ -26,14 +22,14 @@ const LaunchModel = () => {\n <Tab label=\"Embedding Models\" value=\"2\" />\n </TabList>\n </Box>\n- <TabPanel value=\"1\" sx={{padding: 0}}>\n+ <TabPanel value=\"1\" sx={{ padding: 0 }}>\n <LaunchLLM />\n </TabPanel>\n- <TabPanel value=\"2\" sx={{padding: 0}}>\n+ <TabPanel value=\"2\" sx={{ padding: 0 }}>\n <LaunchEmbedding />\n </TabPanel>\n </TabContext>\n- </Box >\n+ </Box>\n );\n };\n \ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\nindex 69756c7d6a..55aa23e2c0 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js\n@@ -21,9 +21,7 @@ const LaunchEmbedding = () => {\n const modelName = registration.model_name\n ? 
registration.model_name.toLowerCase()\n : \"\";\n- if (\n- !modelName.includes(searchTerm.toLowerCase())\n- ) {\n+ if (!modelName.includes(searchTerm.toLowerCase())) {\n return false;\n }\n return true;\n@@ -35,9 +33,12 @@ const LaunchEmbedding = () => {\n try {\n setIsCallingApi(true);\n \n- const response = await fetch(`${endPoint}/v1/model_registrations/embedding`, {\n- method: \"GET\",\n- });\n+ const response = await fetch(\n+ `${endPoint}/v1/model_registrations/embedding`,\n+ {\n+ method: \"GET\",\n+ },\n+ );\n \n const registrations = await response.json();\n const newRegistrationData = await Promise.all(\n@@ -78,11 +79,14 @@ const LaunchEmbedding = () => {\n \n return (\n <Box m=\"20px\">\n- <div style={{ display: \"grid\", gridTemplateColumns: \"1fr\", margin: \"30px 2rem\" }}>\n- <FormControl\n- variant=\"outlined\"\n- margin=\"normal\"\n- >\n+ <div\n+ style={{\n+ display: \"grid\",\n+ gridTemplateColumns: \"1fr\",\n+ margin: \"30px 2rem\",\n+ }}\n+ >\n+ <FormControl variant=\"outlined\" margin=\"normal\">\n <TextField\n id=\"search\"\n type=\"search\"\ndiff --git a/xinference/web/ui/src/scenes/launch_model/launchLLM.js b/xinference/web/ui/src/scenes/launch_model/launchLLM.js\nindex 07c967d5d1..281cf57098 100644\n--- a/xinference/web/ui/src/scenes/launch_model/launchLLM.js\n+++ b/xinference/web/ui/src/scenes/launch_model/launchLLM.js\n@@ -8,7 +8,8 @@ import {\n Select,\n MenuItem,\n InputLabel,\n- Tabs, Tab\n+ Tabs,\n+ Tab,\n } from \"@mui/material\";\n import { ApiContext } from \"../../components/apiContext\";\n \n@@ -65,30 +66,16 @@ const LaunchLLM = () => {\n try {\n setIsCallingApi(true);\n \n- const response = await fetch(`${endPoint}/v1/model_registrations/LLM`, {\n- method: \"GET\",\n- });\n+ const response = await fetch(\n+ `${endPoint}/v1/model_registrations/LLM?detailed=true`,\n+ {\n+ method: \"GET\",\n+ },\n+ );\n \n const registrations = await response.json();\n \n- const newRegistrationData = await Promise.all(\n- registrations.map(async (registration) => {\n- const desc = await fetch(\n- `${endPoint}/v1/model_registrations/LLM/${registration.model_name}`,\n- {\n- method: \"GET\",\n- },\n- );\n-\n- return {\n- ...(await desc.json()),\n- is_builtin: registration.is_builtin,\n- };\n- }),\n- );\n-\n- setRegistrationData(newRegistrationData);\n- console.log(newRegistrationData);\n+ setRegistrationData(registrations);\n } catch (error) {\n console.error(\"Error:\", error);\n } finally {\n@@ -111,17 +98,21 @@ const LaunchLLM = () => {\n function a11yProps(index) {\n return {\n id: `simple-tab-${index}`,\n- 'aria-controls': `simple-tabpanel-${index}`,\n+ \"aria-controls\": `simple-tabpanel-${index}`,\n };\n }\n \n return (\n <Box m=\"20px\">\n- <div style={{ display: \"grid\", gridTemplateColumns: \"150px 1fr\", columnGap: \"20px\", margin: \"30px 2rem\" }}>\n- <FormControl\n- variant=\"outlined\"\n- margin=\"normal\"\n- >\n+ <div\n+ style={{\n+ display: \"grid\",\n+ gridTemplateColumns: \"150px 1fr\",\n+ columnGap: \"20px\",\n+ margin: \"30px 2rem\",\n+ }}\n+ >\n+ <FormControl variant=\"outlined\" margin=\"normal\">\n <InputLabel id=\"ability-select-label\">Model Ability</InputLabel>\n <Select\n id=\"ability\"\n@@ -137,10 +128,7 @@ const LaunchLLM = () => {\n <MenuItem value=\"chat\">chat</MenuItem>\n </Select>\n </FormControl>\n- <FormControl\n- variant=\"outlined\"\n- margin=\"normal\"\n- >\n+ <FormControl variant=\"outlined\" margin=\"normal\">\n <TextField\n id=\"search\"\n type=\"search\"\n@@ -158,7 +146,7 @@ const LaunchLLM = () => {\n <ModelCard url={endPoint} 
modelData={filteredRegistration} />\n ))}\n </div>\n- </Box >\n+ </Box>\n );\n };\n \ndiff --git a/xinference/web/ui/src/scenes/launch_model/modelCard.js b/xinference/web/ui/src/scenes/launch_model/modelCard.js\nindex b440006a92..560ea266de 100644\n--- a/xinference/web/ui/src/scenes/launch_model/modelCard.js\n+++ b/xinference/web/ui/src/scenes/launch_model/modelCard.js\n@@ -1,7 +1,14 @@\n import React, { useState, useContext, useEffect } from \"react\";\n import { v1 as uuidv1 } from \"uuid\";\n import { ApiContext } from \"../../components/apiContext\";\n-import { FormControl, InputLabel, Select, MenuItem, Box, Chip } from \"@mui/material\";\n+import {\n+ FormControl,\n+ InputLabel,\n+ Select,\n+ MenuItem,\n+ Box,\n+ Chip,\n+} from \"@mui/material\";\n import { CircularProgress } from \"@mui/material\";\n import {\n ChatOutlined,\n@@ -242,9 +249,9 @@ const ModelCard = ({ url, modelData }) => {\n smallText: {\n fontSize: \"0.8em\",\n },\n- langRow: {\n- margin: \"2px 5px\"\n- }\n+ tagRow: {\n+ margin: \"2px 5px\",\n+ },\n };\n \n // Set two different states based on mouse hover\n@@ -262,19 +269,40 @@ const ModelCard = ({ url, modelData }) => {\n {/* First state: show description page */}\n <Box style={styles.descriptionCard}>\n <h2 style={styles.h2}>{modelData.model_name}</h2>\n- <div style={styles.langRow}>\n- {(()=> {\n- if ( modelData.model_lang.includes(\"en\")) {\n+ <div style={styles.tagRow}>\n+ {(() => {\n+ if (modelData.model_lang.includes(\"en\")) {\n+ return <Chip label=\"EN\" variant=\"outlined\" size=\"small\" />;\n+ }\n+ })()}\n+ {(() => {\n+ if (modelData.model_lang.includes(\"zh\")) {\n return (\n- <Chip label=\"EN\" variant=\"outlined\" size=\"small\" sx={{marginRight: \"10px\"}}/>\n+ <Chip\n+ label=\"ZH\"\n+ variant=\"outlined\"\n+ size=\"small\"\n+ sx={{ marginLeft: \"10px\" }}\n+ />\n );\n }\n })()}\n- {(()=> {\n- if ( modelData.model_lang.includes(\"zh\")) {\n- return (\n- <Chip label=\"ZH\" variant=\"outlined\" size=\"small\"/>\n+ {(() => {\n+ if (\n+ modelData.model_specs.some((spec) =>\n+ spec.model_format === \"pytorch\"\n+ ? spec.cache_status\n+ : spec.cache_status.some((cs) => cs),\n )\n+ ) {\n+ return (\n+ <Chip\n+ label=\"Cached\"\n+ variant=\"outlined\"\n+ size=\"small\"\n+ sx={{ marginLeft: \"10px\" }}\n+ />\n+ );\n }\n })()}\n </div>\n@@ -338,11 +366,22 @@ const ModelCard = ({ url, modelData }) => {\n onChange={(e) => setModelFormat(e.target.value)}\n label=\"Model Format\"\n >\n- {formatOptions.map((format) => (\n- <MenuItem key={format} value={format}>\n- {format}\n- </MenuItem>\n- ))}\n+ {formatOptions.map((format) => {\n+ const specs = modelData.model_specs.filter(\n+ (spec) => spec.model_format === format,\n+ );\n+ const cached =\n+ format === \"pytorch\"\n+ ? specs.some((spec) => spec.cache_status)\n+ : specs.some((spec) => spec.cache_status.some((cs) => cs));\n+ const displayedFormat = cached ? format + \" (cached)\" : format;\n+\n+ return (\n+ <MenuItem key={format} value={format}>\n+ {displayedFormat}\n+ </MenuItem>\n+ );\n+ })}\n </Select>\n </FormControl>\n <FormControl\n@@ -358,11 +397,22 @@ const ModelCard = ({ url, modelData }) => {\n onChange={(e) => setModelSize(e.target.value)}\n label=\"Model Size\"\n >\n- {sizeOptions.map((size) => (\n- <MenuItem key={size} value={size}>\n- {size}\n- </MenuItem>\n- ))}\n+ {sizeOptions.map((size) => {\n+ const specs = modelData.model_specs\n+ .filter((spec) => spec.model_format === modelFormat)\n+ .filter((spec) => spec.model_size_in_billions === size);\n+ const cached =\n+ modelFormat === \"pytorch\"\n+ ? 
specs.some((spec) => spec.cache_status)\n+ : specs.some((spec) => spec.cache_status.some((cs) => cs));\n+ const displayedSize = cached ? size + \" (cached)\" : size;\n+\n+ return (\n+ <MenuItem key={size} value={size}>\n+ {displayedSize}\n+ </MenuItem>\n+ );\n+ })}\n </Select>\n </FormControl>\n {(modelData.is_builtin || modelFormat === \"pytorch\") && (\n@@ -379,11 +429,25 @@ const ModelCard = ({ url, modelData }) => {\n onChange={(e) => setQuantization(e.target.value)}\n label=\"Quantization\"\n >\n- {quantizationOptions.map((quant) => (\n- <MenuItem key={quant} value={quant}>\n- {quant}\n- </MenuItem>\n- ))}\n+ {quantizationOptions.map((quant, index) => {\n+ const specs = modelData.model_specs\n+ .filter((spec) => spec.model_format === modelFormat)\n+ .filter(\n+ (spec) => spec.model_size_in_billions === modelSize,\n+ );\n+\n+ const cached =\n+ modelFormat === \"pytorch\"\n+ ? specs[0].cache_status\n+ : specs[0].cache_status[index];\n+ const displayedQuant = cached ? quant + \" (cached)\" : quant;\n+\n+ return (\n+ <MenuItem key={quant} value={quant}>\n+ {displayedQuant}\n+ </MenuItem>\n+ );\n+ })}\n </Select>\n </FormControl>\n )}\n" }
[ { "diff_hunk": "@@ -102,46 +106,91 @@ def get_status(self) -> Dict:\n \"workers\": self._worker_status,\n }\n \n+ @staticmethod\n+ def _to_llm_reg(llm_family: \"LLMFamilyV1\", is_builtin: bool) -> Dict[str, Any]:", "line": null, "original_line": 110, "original_start_line": null, "path": "xinference/core/supervisor.py", "start_line": null, "text": "@user1:\nUse lru_cache to wrap this function.\n\n@author:\nUsing an LRU cache might not be suitable for this scenario because the return value is subject to change." }, { "diff_hunk": "@@ -102,46 +106,91 @@ def get_status(self) -> Dict:\n \"workers\": self._worker_status,\n }\n \n+ @staticmethod\n+ def _to_llm_reg(llm_family: \"LLMFamilyV1\", is_builtin: bool) -> Dict[str, Any]:\n+ from ..model.llm import get_cache_status\n+\n+ specs = []\n+ for spec in llm_family.model_specs:\n+ cache_status = get_cache_status(llm_family, spec)\n+ specs.append({**spec.dict(), \"cache_status\": cache_status})\n+\n+ return {**llm_family.dict(), \"is_builtin\": is_builtin, \"model_specs\": specs}\n+\n+ @staticmethod\n+ def _to_embedding_model_reg(", "line": 124, "original_line": 121, "original_start_line": null, "path": "xinference/core/supervisor.py", "start_line": null, "text": "@user1:\nDitto.\n\n@author:\nUsing an LRU cache might not be suitable for this scenario because the return value is subject to change." }, { "diff_hunk": "@@ -102,46 +106,91 @@ def get_status(self) -> Dict:\n \"workers\": self._worker_status,\n }\n \n+ @staticmethod\n+ def _to_llm_reg(llm_family: \"LLMFamilyV1\", is_builtin: bool) -> Dict[str, Any]:\n+ from ..model.llm import get_cache_status\n+\n+ specs = []\n+ for spec in llm_family.model_specs:\n+ cache_status = get_cache_status(llm_family, spec)\n+ specs.append({**spec.dict(), \"cache_status\": cache_status})\n+\n+ return {**llm_family.dict(), \"is_builtin\": is_builtin, \"model_specs\": specs}\n+\n+ @staticmethod\n+ def _to_embedding_model_reg(\n+ model_spec: \"EmbeddingModelSpec\", is_builtin: bool\n+ ) -> Dict[str, Any]:\n+ from ..model.embedding import get_cache_status\n+\n+ cache_status = get_cache_status(model_spec)\n+ return {\n+ **model_spec.dict(),\n+ \"cache_status\": cache_status,\n+ \"is_builtin\": is_builtin,\n+ }\n+\n+ @staticmethod\n+ def _to_image_model_reg(", "line": 143, "original_line": 134, "original_start_line": null, "path": "xinference/core/supervisor.py", "start_line": null, "text": "@user1:\nDitto.\n\n@author:\nUsing an LRU cache might not be suitable for this scenario because the return value is subject to change." } ]
c5462cc8fb92d6175ef6ddb970e5836b9de7da86
diff --git a/xinference/api/restful_api.py b/xinference/api/restful_api.py index 9075d48479..930b1b37ee 100644 --- a/xinference/api/restful_api.py +++ b/xinference/api/restful_api.py @@ -23,7 +23,16 @@ import gradio as gr import xoscar as xo -from fastapi import APIRouter, FastAPI, File, Form, HTTPException, Request, UploadFile +from fastapi import ( + APIRouter, + FastAPI, + File, + Form, + HTTPException, + Query, + Request, + UploadFile, +) from fastapi.middleware.cors import CORSMiddleware from fastapi.responses import JSONResponse from fastapi.staticfiles import StaticFiles @@ -860,10 +869,12 @@ async def unregister_model(self, model_type: str, model_name: str): logger.error(e, exc_info=True) raise HTTPException(status_code=500, detail=str(e)) - async def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]: + async def list_model_registrations( + self, model_type: str, detailed: bool = Query(False) + ) -> List[Dict[str, Any]]: try: return await (await self._get_supervisor_ref()).list_model_registrations( - model_type + model_type, detailed=detailed ) except ValueError as re: logger.error(re, exc_info=True) diff --git a/xinference/core/supervisor.py b/xinference/core/supervisor.py index 63ba0c44ef..33942c5ce0 100644 --- a/xinference/core/supervisor.py +++ b/xinference/core/supervisor.py @@ -33,8 +33,12 @@ ) if TYPE_CHECKING: + from ..model.embedding import EmbeddingModelSpec + from ..model.image import ImageModelFamilyV1 + from ..model.llm import LLMFamilyV1 from .worker import WorkerActor + logger = getLogger(__name__) @@ -102,8 +106,63 @@ def get_status(self) -> Dict: "workers": self._worker_status, } + def _to_llm_reg( + self, llm_family: "LLMFamilyV1", is_builtin: bool + ) -> Dict[str, Any]: + from ..model.llm import get_cache_status + + if self.is_local_deployment(): + specs = [] + # TODO: does not work when the supervisor and worker are running on separate nodes. + for spec in llm_family.model_specs: + cache_status = get_cache_status(llm_family, spec) + specs.append({**spec.dict(), "cache_status": cache_status}) + return {**llm_family.dict(), "is_builtin": is_builtin, "model_specs": specs} + else: + return {**llm_family.dict(), "is_builtin": is_builtin} + + def _to_embedding_model_reg( + self, model_spec: "EmbeddingModelSpec", is_builtin: bool + ) -> Dict[str, Any]: + from ..model.embedding import get_cache_status + + if self.is_local_deployment(): + # TODO: does not work when the supervisor and worker are running on separate nodes. + cache_status = get_cache_status(model_spec) + return { + **model_spec.dict(), + "cache_status": cache_status, + "is_builtin": is_builtin, + } + else: + return { + **model_spec.dict(), + "is_builtin": is_builtin, + } + + def _to_image_model_reg( + self, model_family: "ImageModelFamilyV1", is_builtin: bool + ) -> Dict[str, Any]: + from ..model.image import get_cache_status + + if self.is_local_deployment(): + # TODO: does not work when the supervisor and worker are running on separate nodes. 
+ cache_status = get_cache_status(model_family) + return { + **model_family.dict(), + "cache_status": cache_status, + "is_builtin": is_builtin, + } + else: + return { + **model_family.dict(), + "is_builtin": is_builtin, + } + @log_sync(logger=logger) - def list_model_registrations(self, model_type: str) -> List[Dict[str, Any]]: + def list_model_registrations( + self, model_type: str, detailed: bool = False + ) -> List[Dict[str, Any]]: def sort_helper(item): assert isinstance(item["model_name"], str) return item.get("model_name").lower() @@ -111,37 +170,43 @@ def sort_helper(item): if model_type == "LLM": from ..model.llm import BUILTIN_LLM_FAMILIES, get_user_defined_llm_families - ret = [ - {"model_name": f.model_name, "is_builtin": True} - for f in BUILTIN_LLM_FAMILIES - ] - user_defined_llm_families = get_user_defined_llm_families() - ret.extend( - [ - {"model_name": f.model_name, "is_builtin": False} - for f in user_defined_llm_families - ] - ) + ret = [] + for family in BUILTIN_LLM_FAMILIES: + if detailed: + ret.append(self._to_llm_reg(family, True)) + else: + ret.append({"model_name": family.model_name, "is_builtin": True}) - ret.sort(key=sort_helper) + for family in get_user_defined_llm_families(): + if detailed: + ret.append(self._to_llm_reg(family, False)) + else: + ret.append({"model_name": family.model_name, "is_builtin": False}) + ret.sort(key=sort_helper) return ret elif model_type == "embedding": from ..model.embedding import BUILTIN_EMBEDDING_MODELS - ret = [ - {"model_name": model_name, "is_builtin": True} - for model_name in BUILTIN_EMBEDDING_MODELS - ] + ret = [] + for model_name, family in BUILTIN_EMBEDDING_MODELS.items(): + if detailed: + ret.append(self._to_embedding_model_reg(family, is_builtin=True)) + else: + ret.append({"model_name": model_name, "is_builtin": True}) + ret.sort(key=sort_helper) return ret elif model_type == "image": from ..model.image import BUILTIN_IMAGE_MODELS - ret = [ - {"model_name": model_name, "is_builtin": True} - for model_name in BUILTIN_IMAGE_MODELS - ] + ret = [] + for model_name, family in BUILTIN_IMAGE_MODELS.items(): + if detailed: + ret.append(self._to_image_model_reg(family, is_builtin=True)) + else: + ret.append({"model_name": model_name, "is_builtin": True}) + ret.sort(key=sort_helper) return ret else: @@ -181,7 +246,7 @@ async def register_model(self, model_type: str, model: str, persist: bool): if model_type == "LLM": from ..model.llm import LLMFamilyV1, register_llm - if not self.is_local_deployment: + if not self.is_local_deployment(): workers = list(self._worker_address_to_worker.values()) for worker in workers: await worker.register_model(model_type, model, persist) @@ -198,7 +263,7 @@ async def unregister_model(self, model_type: str, model_name: str): unregister_llm(model_name) - if not self.is_local_deployment: + if not self.is_local_deployment(): workers = list(self._worker_address_to_worker.values()) for worker in workers: await worker.unregister_model(model_name) @@ -421,7 +486,6 @@ async def list_models(self) -> Dict[str, Dict[str, Any]]: ret.update(await worker.list_models()) return {parse_replica_model_uid(k)[0]: v for k, v in ret.items()} - @log_sync(logger=logger) def is_local_deployment(self) -> bool: # TODO: temporary. 
return ( diff --git a/xinference/core/worker.py b/xinference/core/worker.py index a8dcd7794d..b35642e785 100644 --- a/xinference/core/worker.py +++ b/xinference/core/worker.py @@ -133,7 +133,7 @@ def _check_model_is_valid(self, model_name: str, model_format: Optional[str]): raise ValueError(f"{model_name} model can't run on Darwin system.") @log_sync(logger=logger) - async def register_model(self, model_type: str, model: str, persist: bool): + def register_model(self, model_type: str, model: str, persist: bool): # TODO: centralized model registrations if model_type == "LLM": from ..model.llm import LLMFamilyV1, register_llm @@ -144,7 +144,7 @@ async def register_model(self, model_type: str, model: str, persist: bool): raise ValueError(f"Unsupported model type: {model_type}") @log_sync(logger=logger) - async def unregister_model(self, model_type: str, model_name: str): + def unregister_model(self, model_type: str, model_name: str): # TODO: centralized model registrations if model_type == "LLM": from ..model.llm import unregister_llm diff --git a/xinference/model/embedding/__init__.py b/xinference/model/embedding/__init__.py index 3c018c9da4..59a2592b5b 100644 --- a/xinference/model/embedding/__init__.py +++ b/xinference/model/embedding/__init__.py @@ -16,7 +16,7 @@ import json import os -from .core import EmbeddingModelSpec +from .core import EmbeddingModelSpec, get_cache_status _model_spec_json = os.path.join(os.path.dirname(__file__), "model_spec.json") _model_spec_modelscope_json = os.path.join( diff --git a/xinference/model/embedding/core.py b/xinference/model/embedding/core.py index 94c193eb26..9a6364dafb 100644 --- a/xinference/model/embedding/core.py +++ b/xinference/model/embedding/core.py @@ -98,6 +98,18 @@ def cache(model_spec: EmbeddingModelSpec): return cache_dir +def get_cache_status( + model_spec: EmbeddingModelSpec, +) -> bool: + cache_dir = os.path.realpath( + os.path.join(XINFERENCE_CACHE_DIR, model_spec.model_name) + ) + if not os.path.exists(cache_dir): + os.makedirs(cache_dir, exist_ok=True) + meta_path = os.path.join(cache_dir, "__valid_download") + return valid_model_revision(meta_path, model_spec.model_revision) + + class EmbeddingModel: def __init__(self, model_uid: str, model_path: str, device: Optional[str] = None): self._model_uid = model_uid diff --git a/xinference/model/embedding/tests/test_embedding_models.py b/xinference/model/embedding/tests/test_embedding_models.py index 9bbe132fa7..7dc6a9171e 100644 --- a/xinference/model/embedding/tests/test_embedding_models.py +++ b/xinference/model/embedding/tests/test_embedding_models.py @@ -11,7 +11,9 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. + import os +import shutil from ...utils import valid_model_revision from ..core import EmbeddingModel, EmbeddingModelSpec, cache @@ -45,28 +47,33 @@ def test_model(): - model_path = cache(TEST_MODEL_SPEC) - model = EmbeddingModel("mock", model_path) - # input is a string - input_text = "what is the capital of China?" 
- model.load() - r = model.create_embedding(input_text) - assert len(r["data"]) == 1 - for d in r["data"]: - assert len(d["embedding"]) == 384 - - # input is a lit - input_texts = [ - "what is the capital of China?", - "how to implement quick sort in python?", - "Beijing", - "sorting algorithms", - ] - model.load() - r = model.create_embedding(input_texts) - assert len(r["data"]) == 4 - for d in r["data"]: - assert len(d["embedding"]) == 384 + model_path = None + try: + model_path = cache(TEST_MODEL_SPEC) + model = EmbeddingModel("mock", model_path) + # input is a string + input_text = "what is the capital of China?" + model.load() + r = model.create_embedding(input_text) + assert len(r["data"]) == 1 + for d in r["data"]: + assert len(d["embedding"]) == 384 + + # input is a lit + input_texts = [ + "what is the capital of China?", + "how to implement quick sort in python?", + "Beijing", + "sorting algorithms", + ] + model.load() + r = model.create_embedding(input_texts) + assert len(r["data"]) == 4 + for d in r["data"]: + assert len(d["embedding"]) == 384 + finally: + if model_path is not None: + shutil.rmtree(model_path, ignore_errors=True) def test_model_from_modelscope(): @@ -82,21 +89,38 @@ def test_model_from_modelscope(): def test_meta_file(): - cache_dir = cache(TEST_MODEL_SPEC) - meta_path = os.path.join(cache_dir, "__valid_download") - assert valid_model_revision(meta_path, TEST_MODEL_SPEC.model_revision) - - # test another version of the same model - assert not valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision) - cache_dir = cache(TEST_MODEL_SPEC2) - meta_path = os.path.join(cache_dir, "__valid_download") - assert valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision) - - # test functionality of the new version model - model = EmbeddingModel("mock", cache_dir) - input_text = "I can do this all day." - model.load() - r = model.create_embedding(input_text) - assert len(r["data"]) == 1 - for d in r["data"]: - assert len(d["embedding"]) == 384 + cache_dir = None + try: + cache_dir = cache(TEST_MODEL_SPEC) + meta_path = os.path.join(cache_dir, "__valid_download") + assert valid_model_revision(meta_path, TEST_MODEL_SPEC.model_revision) + + # test another version of the same model + assert not valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision) + cache_dir = cache(TEST_MODEL_SPEC2) + meta_path = os.path.join(cache_dir, "__valid_download") + assert valid_model_revision(meta_path, TEST_MODEL_SPEC2.model_revision) + + # test functionality of the new version model + model = EmbeddingModel("mock", cache_dir) + input_text = "I can do this all day." 
+ model.load() + r = model.create_embedding(input_text) + assert len(r["data"]) == 1 + for d in r["data"]: + assert len(d["embedding"]) == 384 + finally: + shutil.rmtree(cache_dir, ignore_errors=True) + + +def test_get_cache_status(): + from ..core import get_cache_status + + model_path = None + try: + assert get_cache_status(TEST_MODEL_SPEC) is False + model_path = cache(TEST_MODEL_SPEC) + assert get_cache_status(TEST_MODEL_SPEC) is True + finally: + if model_path is not None: + shutil.rmtree(model_path, ignore_errors=True) diff --git a/xinference/model/image/__init__.py b/xinference/model/image/__init__.py index 2f912d3928..432bbdd3a9 100644 --- a/xinference/model/image/__init__.py +++ b/xinference/model/image/__init__.py @@ -16,7 +16,7 @@ import json import os -from .core import ImageModelFamilyV1 +from .core import ImageModelFamilyV1, get_cache_status _model_spec_json = os.path.join(os.path.dirname(__file__), "model_spec.json") BUILTIN_IMAGE_MODELS = dict( diff --git a/xinference/model/image/core.py b/xinference/model/image/core.py index 0170f7c8cb..65c699d024 100644 --- a/xinference/model/image/core.py +++ b/xinference/model/image/core.py @@ -20,6 +20,7 @@ from ...constants import XINFERENCE_CACHE_DIR from ..core import ModelDescription +from ..utils import valid_model_revision from .stable_diffusion.core import DiffusionModel MAX_ATTEMPTS = 3 @@ -70,6 +71,11 @@ def cache(model_spec: ImageModelFamilyV1): ) if not os.path.exists(cache_dir): os.makedirs(cache_dir, exist_ok=True) + + meta_path = os.path.join(cache_dir, "__valid_download") + if valid_model_revision(meta_path, model_spec.model_revision): + return cache_dir + for current_attempt in range(1, MAX_ATTEMPTS + 1): try: huggingface_hub.snapshot_download( @@ -89,9 +95,28 @@ def cache(model_spec: ImageModelFamilyV1): raise RuntimeError( f"Failed to download model '{model_spec.model_name}' after {MAX_ATTEMPTS} attempts" ) + + with open(meta_path, "w") as f: + import json + + desc = ImageModelDescription(model_spec) + json.dump(desc.to_dict(), f) + return cache_dir +def get_cache_status( + model_spec: ImageModelFamilyV1, +) -> bool: + cache_dir = os.path.realpath( + os.path.join(XINFERENCE_CACHE_DIR, model_spec.model_name) + ) + if not os.path.exists(cache_dir): + os.makedirs(cache_dir, exist_ok=True) + meta_path = os.path.join(cache_dir, "__valid_download") + return valid_model_revision(meta_path, model_spec.model_revision) + + def create_image_model_instance( model_uid: str, model_name: str, **kwargs ) -> Tuple[DiffusionModel, ImageModelDescription]: diff --git a/xinference/model/image/tests/test_stable_diffusion.py b/xinference/model/image/tests/test_stable_diffusion.py index 809cd82d3d..397474bfe8 100644 --- a/xinference/model/image/tests/test_stable_diffusion.py +++ b/xinference/model/image/tests/test_stable_diffusion.py @@ -15,6 +15,7 @@ import io import logging import os.path +import shutil from io import BytesIO import pytest @@ -34,20 +35,25 @@ def test_model(): - model_path = cache(TEST_MODEL_SPEC) - model = DiffusionModel("mock", model_path) - # input is a string - input_text = "an apple" - model.load() - r = model.text_to_image(input_text, size="256*256") - assert len(r["data"]) == 1 - assert os.path.exists(r["data"][0]["url"]) - r = model.text_to_image(input_text, size="256*256", response_format="b64_json") - assert len(r["data"]) == 1 - b64_json = r["data"][0]["b64_json"] - image_bytes = base64.decodebytes(b64_json) - img = Image.open(BytesIO(image_bytes)) - assert img.size == (256, 256) + model_path = None + try: + 
model_path = cache(TEST_MODEL_SPEC) + model = DiffusionModel("mock", model_path) + # input is a string + input_text = "an apple" + model.load() + r = model.text_to_image(input_text, size="256*256") + assert len(r["data"]) == 1 + assert os.path.exists(r["data"][0]["url"]) + r = model.text_to_image(input_text, size="256*256", response_format="b64_json") + assert len(r["data"]) == 1 + b64_json = r["data"][0]["b64_json"] + image_bytes = base64.decodebytes(b64_json) + img = Image.open(BytesIO(image_bytes)) + assert img.size == (256, 256) + finally: + if model_path is not None: + shutil.rmtree(model_path) @pytest.mark.skip(reason="Stable diffusion controlnet requires too many GRAM.") @@ -136,3 +142,16 @@ def test_restful_api_for_image_with_mlsd_controlnet(setup): num_inference_steps=20, ) logger.info("test result %s", r) + + +def test_get_cache_status(): + from ..core import get_cache_status + + model_path = None + try: + assert get_cache_status(TEST_MODEL_SPEC) is False + model_path = cache(TEST_MODEL_SPEC) + assert get_cache_status(TEST_MODEL_SPEC) is True + finally: + if model_path is not None: + shutil.rmtree(model_path) diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py index abb44caaa8..6271c9bec0 100644 --- a/xinference/model/llm/__init__.py +++ b/xinference/model/llm/__init__.py @@ -26,6 +26,7 @@ LLMSpecV1, PromptStyleV1, PytorchLLMSpecV1, + get_cache_status, get_user_defined_llm_families, match_llm, match_llm_cls, diff --git a/xinference/model/llm/llm_family.py b/xinference/model/llm/llm_family.py index d555755918..d4ec01f973 100644 --- a/xinference/model/llm/llm_family.py +++ b/xinference/model/llm/llm_family.py @@ -581,6 +581,36 @@ def cache_from_huggingface( return cache_dir +def get_cache_status( + llm_family: LLMFamilyV1, + llm_spec: "LLMSpecV1", +) -> Union[bool, List[bool]]: + cache_dir = _get_cache_dir(llm_family, llm_spec) + if llm_spec.model_format == "pytorch": + return _skip_download( + cache_dir, + llm_spec.model_format, + llm_spec.model_hub, + llm_spec.model_revision, + "none", + ) + elif llm_spec.model_format in ["ggmlv3", "ggufv2"]: + ret = [] + for q in llm_spec.quantizations: + ret.append( + _skip_download( + cache_dir, + llm_spec.model_format, + llm_spec.model_hub, + llm_spec.model_revision, + q, + ) + ) + return ret + else: + raise ValueError(f"Unsupported model format: {llm_spec.model_format}") + + def _is_linux(): return platform.system() == "Linux" diff --git a/xinference/model/llm/tests/test_llm_family.py b/xinference/model/llm/tests/test_llm_family.py index 107afe75ae..46f190b703 100644 --- a/xinference/model/llm/tests/test_llm_family.py +++ b/xinference/model/llm/tests/test_llm_family.py @@ -866,3 +866,75 @@ def test_skip_download_ggml(): finally: os.remove(ms_meta_path) assert not os.path.exists(ms_meta_path) + + +def test_get_cache_status_pytorch(): + from ..llm_family import cache_from_huggingface, get_cache_status + + spec = PytorchLLMSpecV1( + model_format="pytorch", + model_size_in_billions=1, + quantizations=["4-bit", "8-bit", "none"], + model_id="facebook/opt-125m", + ) + family = LLMFamilyV1( + version=1, + context_length=2048, + model_type="LLM", + model_name="opt", + model_lang=["en"], + model_ability=["embed", "generate"], + model_specs=[spec], + prompt_style=None, + ) + + cache_status = get_cache_status(llm_family=family, llm_spec=spec) + assert not isinstance(cache_status, list) + assert not cache_status + + cache_dir = cache_from_huggingface(family, spec, quantization=None) + cache_status = 
get_cache_status(llm_family=family, llm_spec=spec) + assert not isinstance(cache_status, list) + assert cache_status + + assert os.path.exists(cache_dir) + assert os.path.exists(os.path.join(cache_dir, "README.md")) + assert os.path.islink(os.path.join(cache_dir, "README.md")) + shutil.rmtree(cache_dir) + + +def test_get_cache_status_ggml(): + from ..llm_family import cache_from_huggingface, get_cache_status + + spec = GgmlLLMSpecV1( + model_format="ggmlv3", + model_size_in_billions=3, + model_id="TheBloke/orca_mini_3B-GGML", + quantizations=["q4_0", "q5_0"], + model_file_name_template="README.md", + ) + family = LLMFamilyV1( + version=1, + context_length=2048, + model_type="LLM", + model_name="orca", + model_lang=["en"], + model_ability=["embed", "chat"], + model_specs=[spec], + prompt_style=None, + ) + + cache_status = get_cache_status(llm_family=family, llm_spec=spec) + assert isinstance(cache_status, list) + assert not any(cache_status) + + cache_dir = cache_from_huggingface(family, spec, quantization="q4_0") + cache_status = get_cache_status(llm_family=family, llm_spec=spec) + assert isinstance(cache_status, list) + assert len(cache_status) == 2 + assert cache_status[0] and not cache_status[1] + + assert os.path.exists(cache_dir) + assert os.path.exists(os.path.join(cache_dir, "README.md")) + assert os.path.islink(os.path.join(cache_dir, "README.md")) + shutil.rmtree(cache_dir) diff --git a/xinference/model/utils.py b/xinference/model/utils.py index 5ab0425fd6..7f84136c89 100644 --- a/xinference/model/utils.py +++ b/xinference/model/utils.py @@ -118,7 +118,7 @@ def valid_model_revision( logger.debug("Legacy meta file detected.") return True - if "model_revision" in meta_data: # embedding + if "model_revision" in meta_data: # embedding, image real_revision = meta_data["model_revision"] elif "revision" in meta_data: # llm real_revision = meta_data["revision"] diff --git a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js index c42e00d6d2..20dfe33caa 100644 --- a/xinference/web/ui/src/scenes/launch_model/embeddingCard.js +++ b/xinference/web/ui/src/scenes/launch_model/embeddingCard.js @@ -3,10 +3,7 @@ import { v1 as uuidv1 } from "uuid"; import { ApiContext } from "../../components/apiContext"; import { Box, Chip } from "@mui/material"; import { CircularProgress } from "@mui/material"; -import { - UndoOutlined, - RocketLaunchOutlined, -} from "@mui/icons-material"; +import { UndoOutlined, RocketLaunchOutlined } from "@mui/icons-material"; const CARD_HEIGHT = 270; const CARD_WIDTH = 270; @@ -33,7 +30,7 @@ const EmbeddingCard = ({ url, modelData }) => { const modelDataWithID = { model_uid: uuid, model_name: modelData.model_name, - model_type: "embedding" + model_type: "embedding", }; // First fetch request to initiate the model @@ -175,8 +172,8 @@ const EmbeddingCard = ({ url, modelData }) => { fontSize: "0.8em", }, langRow: { - margin: "2px 5px 40px 5px" - } + margin: "2px 5px 40px 5px", + }, }; // Set two different states based on mouse hover @@ -199,30 +196,29 @@ const EmbeddingCard = ({ url, modelData }) => { {(() => { if (modelData.language.includes("en")) { return ( - <Chip label="EN" variant="outlined" size="small" sx={{ marginRight: "10px" }} /> + <Chip + label="EN" + variant="outlined" + size="small" + sx={{ marginRight: "10px" }} + /> ); } })()} {(() => { if (modelData.language.includes("zh")) { - return ( - <Chip label="ZH" variant="outlined" size="small" /> - ); + return <Chip label="ZH" variant="outlined" 
size="small" />; } })()} </div> </div> <div style={styles.iconRow}> <div style={styles.iconItem}> - <span style={styles.boldIconText}> - {modelData.dimensions} - </span> + <span style={styles.boldIconText}>{modelData.dimensions}</span> <small style={styles.smallText}>dimensions</small> </div> <div style={styles.iconItem}> - <span style={styles.boldIconText}> - {modelData.max_tokens} - </span> + <span style={styles.boldIconText}>{modelData.max_tokens}</span> <small style={styles.smallText}>max tokens</small> </div> </div> @@ -248,13 +244,7 @@ const EmbeddingCard = ({ url, modelData }) => { title="Launch Embedding" style={styles.buttonContainer} onClick={() => launchModel(url, modelData)} - disabled={ - isCallingApi || - isUpdatingModel || - !( - modelData - ) - } + disabled={isCallingApi || isUpdatingModel || !modelData} > {(() => { if (isCallingApi || isUpdatingModel) { @@ -270,11 +260,7 @@ const EmbeddingCard = ({ url, modelData }) => { /> </Box> ); - } else if ( - !( - modelData - ) - ) { + } else if (!modelData) { return ( <Box style={{ ...styles.buttonItem, backgroundColor: "#f2f2f2" }} diff --git a/xinference/web/ui/src/scenes/launch_model/index.js b/xinference/web/ui/src/scenes/launch_model/index.js index 111576d481..69d5979f01 100644 --- a/xinference/web/ui/src/scenes/launch_model/index.js +++ b/xinference/web/ui/src/scenes/launch_model/index.js @@ -1,13 +1,9 @@ -import React, { } from "react"; +import React from "react"; import Title from "../../components/Title"; import LaunchLLM from "./launchLLM"; import LaunchEmbedding from "./launchEmbedding"; -import { - Box, - Tab -} from "@mui/material"; -import { TabContext, TabList, TabPanel } from "@mui/lab" - +import { Box, Tab } from "@mui/material"; +import { TabContext, TabList, TabPanel } from "@mui/lab"; const LaunchModel = () => { const [value, setValue] = React.useState("1"); @@ -26,14 +22,14 @@ const LaunchModel = () => { <Tab label="Embedding Models" value="2" /> </TabList> </Box> - <TabPanel value="1" sx={{padding: 0}}> + <TabPanel value="1" sx={{ padding: 0 }}> <LaunchLLM /> </TabPanel> - <TabPanel value="2" sx={{padding: 0}}> + <TabPanel value="2" sx={{ padding: 0 }}> <LaunchEmbedding /> </TabPanel> </TabContext> - </Box > + </Box> ); }; diff --git a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js index 69756c7d6a..55aa23e2c0 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js +++ b/xinference/web/ui/src/scenes/launch_model/launchEmbedding.js @@ -21,9 +21,7 @@ const LaunchEmbedding = () => { const modelName = registration.model_name ? 
registration.model_name.toLowerCase() : ""; - if ( - !modelName.includes(searchTerm.toLowerCase()) - ) { + if (!modelName.includes(searchTerm.toLowerCase())) { return false; } return true; @@ -35,9 +33,12 @@ const LaunchEmbedding = () => { try { setIsCallingApi(true); - const response = await fetch(`${endPoint}/v1/model_registrations/embedding`, { - method: "GET", - }); + const response = await fetch( + `${endPoint}/v1/model_registrations/embedding`, + { + method: "GET", + }, + ); const registrations = await response.json(); const newRegistrationData = await Promise.all( @@ -78,11 +79,14 @@ const LaunchEmbedding = () => { return ( <Box m="20px"> - <div style={{ display: "grid", gridTemplateColumns: "1fr", margin: "30px 2rem" }}> - <FormControl - variant="outlined" - margin="normal" - > + <div + style={{ + display: "grid", + gridTemplateColumns: "1fr", + margin: "30px 2rem", + }} + > + <FormControl variant="outlined" margin="normal"> <TextField id="search" type="search" diff --git a/xinference/web/ui/src/scenes/launch_model/launchLLM.js b/xinference/web/ui/src/scenes/launch_model/launchLLM.js index 07c967d5d1..281cf57098 100644 --- a/xinference/web/ui/src/scenes/launch_model/launchLLM.js +++ b/xinference/web/ui/src/scenes/launch_model/launchLLM.js @@ -8,7 +8,8 @@ import { Select, MenuItem, InputLabel, - Tabs, Tab + Tabs, + Tab, } from "@mui/material"; import { ApiContext } from "../../components/apiContext"; @@ -65,30 +66,16 @@ const LaunchLLM = () => { try { setIsCallingApi(true); - const response = await fetch(`${endPoint}/v1/model_registrations/LLM`, { - method: "GET", - }); + const response = await fetch( + `${endPoint}/v1/model_registrations/LLM?detailed=true`, + { + method: "GET", + }, + ); const registrations = await response.json(); - const newRegistrationData = await Promise.all( - registrations.map(async (registration) => { - const desc = await fetch( - `${endPoint}/v1/model_registrations/LLM/${registration.model_name}`, - { - method: "GET", - }, - ); - - return { - ...(await desc.json()), - is_builtin: registration.is_builtin, - }; - }), - ); - - setRegistrationData(newRegistrationData); - console.log(newRegistrationData); + setRegistrationData(registrations); } catch (error) { console.error("Error:", error); } finally { @@ -111,17 +98,21 @@ const LaunchLLM = () => { function a11yProps(index) { return { id: `simple-tab-${index}`, - 'aria-controls': `simple-tabpanel-${index}`, + "aria-controls": `simple-tabpanel-${index}`, }; } return ( <Box m="20px"> - <div style={{ display: "grid", gridTemplateColumns: "150px 1fr", columnGap: "20px", margin: "30px 2rem" }}> - <FormControl - variant="outlined" - margin="normal" - > + <div + style={{ + display: "grid", + gridTemplateColumns: "150px 1fr", + columnGap: "20px", + margin: "30px 2rem", + }} + > + <FormControl variant="outlined" margin="normal"> <InputLabel id="ability-select-label">Model Ability</InputLabel> <Select id="ability" @@ -137,10 +128,7 @@ const LaunchLLM = () => { <MenuItem value="chat">chat</MenuItem> </Select> </FormControl> - <FormControl - variant="outlined" - margin="normal" - > + <FormControl variant="outlined" margin="normal"> <TextField id="search" type="search" @@ -158,7 +146,7 @@ const LaunchLLM = () => { <ModelCard url={endPoint} modelData={filteredRegistration} /> ))} </div> - </Box > + </Box> ); }; diff --git a/xinference/web/ui/src/scenes/launch_model/modelCard.js b/xinference/web/ui/src/scenes/launch_model/modelCard.js index b440006a92..42def8c0a9 100644 --- 
a/xinference/web/ui/src/scenes/launch_model/modelCard.js +++ b/xinference/web/ui/src/scenes/launch_model/modelCard.js @@ -1,7 +1,14 @@ import React, { useState, useContext, useEffect } from "react"; import { v1 as uuidv1 } from "uuid"; import { ApiContext } from "../../components/apiContext"; -import { FormControl, InputLabel, Select, MenuItem, Box, Chip } from "@mui/material"; +import { + FormControl, + InputLabel, + Select, + MenuItem, + Box, + Chip, +} from "@mui/material"; import { CircularProgress } from "@mui/material"; import { ChatOutlined, @@ -29,6 +36,14 @@ const ModelCard = ({ url, modelData }) => { const [sizeOptions, setSizeOptions] = useState([]); const [quantizationOptions, setQuantizationOptions] = useState([]); + const isCached = (spec) => { + if (spec.model_format === "pytorch") { + return spec.cache_status && spec.cache_status === true; + } else { + return spec.cache_status && spec.cache_status.some((cs) => cs); + } + }; + // UseEffects for parameter selection, change options based on previous selections useEffect(() => { if (modelData) { @@ -242,9 +257,9 @@ const ModelCard = ({ url, modelData }) => { smallText: { fontSize: "0.8em", }, - langRow: { - margin: "2px 5px" - } + tagRow: { + margin: "2px 5px", + }, }; // Set two different states based on mouse hover @@ -262,19 +277,34 @@ const ModelCard = ({ url, modelData }) => { {/* First state: show description page */} <Box style={styles.descriptionCard}> <h2 style={styles.h2}>{modelData.model_name}</h2> - <div style={styles.langRow}> - {(()=> { - if ( modelData.model_lang.includes("en")) { + <div style={styles.tagRow}> + {(() => { + if (modelData.model_lang.includes("en")) { + return <Chip label="EN" variant="outlined" size="small" />; + } + })()} + {(() => { + if (modelData.model_lang.includes("zh")) { return ( - <Chip label="EN" variant="outlined" size="small" sx={{marginRight: "10px"}}/> + <Chip + label="ZH" + variant="outlined" + size="small" + sx={{ marginLeft: "10px" }} + /> ); } })()} - {(()=> { - if ( modelData.model_lang.includes("zh")) { + {(() => { + if (modelData.model_specs.some((spec) => isCached(spec))) { return ( - <Chip label="ZH" variant="outlined" size="small"/> - ) + <Chip + label="Cached" + variant="outlined" + size="small" + sx={{ marginLeft: "10px" }} + /> + ); } })()} </div> @@ -338,11 +368,19 @@ const ModelCard = ({ url, modelData }) => { onChange={(e) => setModelFormat(e.target.value)} label="Model Format" > - {formatOptions.map((format) => ( - <MenuItem key={format} value={format}> - {format} - </MenuItem> - ))} + {formatOptions.map((format) => { + const specs = modelData.model_specs.filter( + (spec) => spec.model_format === format, + ); + const cached = specs.some((spec) => isCached(spec)); + const displayedFormat = cached ? format + " (cached)" : format; + + return ( + <MenuItem key={format} value={format}> + {displayedFormat} + </MenuItem> + ); + })} </Select> </FormControl> <FormControl @@ -358,11 +396,19 @@ const ModelCard = ({ url, modelData }) => { onChange={(e) => setModelSize(e.target.value)} label="Model Size" > - {sizeOptions.map((size) => ( - <MenuItem key={size} value={size}> - {size} - </MenuItem> - ))} + {sizeOptions.map((size) => { + const specs = modelData.model_specs + .filter((spec) => spec.model_format === modelFormat) + .filter((spec) => spec.model_size_in_billions === size); + const cached = specs.some((spec) => isCached(spec)); + const displayedSize = cached ? 
size + " (cached)" : size; + + return ( + <MenuItem key={size} value={size}> + {displayedSize} + </MenuItem> + ); + })} </Select> </FormControl> {(modelData.is_builtin || modelFormat === "pytorch") && ( @@ -379,11 +425,26 @@ const ModelCard = ({ url, modelData }) => { onChange={(e) => setQuantization(e.target.value)} label="Quantization" > - {quantizationOptions.map((quant) => ( - <MenuItem key={quant} value={quant}> - {quant} - </MenuItem> - ))} + {quantizationOptions.map((quant, index) => { + const specs = modelData.model_specs + .filter((spec) => spec.model_format === modelFormat) + .filter( + (spec) => spec.model_size_in_billions === modelSize, + ); + + const cached = + modelFormat === "pytorch" + ? specs[0].cache_status && specs[0].cache_status === true + : specs[0].cache_status && + specs[0].cache_status[index] === true; + const displayedQuant = cached ? quant + " (cached)" : quant; + + return ( + <MenuItem key={quant} value={quant}> + {displayedQuant} + </MenuItem> + ); + })} </Select> </FormControl> )}
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
xorbitsai__inference-613@dbb4899
xorbitsai/inference
Python
613
FEAT: support chatglm3 with ggml format
Resolve #609 Resolve #608
2023-11-06T07:16:05Z
BUG: Downloading model led to worker timeout ### Describe the bug Worker lost after downloading the model. ``` 2023-11-06 12:00:52,145 - modelscope - INFO - Use user-specified model revision: v1.0.0 Downloading: 100%|███████████████████████████████████████████████████████████████████| 3.27G/3.27G [06:02<00:00, 9.68MB/s] 2023-11-06 12:06:57,211 xinference.core.supervisor 70317 ERROR Worker timeout. address: 127.0.0.1:45560, influenced models: [] ``` ### To Reproduce To help us to reproduce this bug, please provide information below: 1. Your Python version. 2. The version of xinference you use. 3. Versions of crucial packages. 4. Full stack of the error. 5. Minimized code to reproduce the error. ### Expected behavior A clear and concise description of what you expected to happen. ### Additional context Add any other context about the problem here. BUG: Failed to use chatglm model with ggml format ### Describe the bug logs: ``` Traceback (most recent call last): File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/routes.py", line 523, in run_predict output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/blocks.py", line 1437, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/blocks.py", line 1123, in call_function prediction = await utils.async_iteration(iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py", line 508, in async_iteration return await iterator.__anext__() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py", line 827, in asyncgen_wrapper async for response in f(*args, **kwargs): File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/chat_interface.py", line 438, in _stream_fn first_response = await async_iteration(generator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py", line 508, in async_iteration return await iterator.__anext__() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py", line 501, in __anext__ return await anyio.to_thread.run_sync( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py", line 484, in run_sync_iterator_async return next(iterator) ^^^^^^^^^^^^^^ File "/Users/hekaisheng/Documents/projects/inference/xinference/core/chat_interface.py", line 105, in generate_wrapper assert isinstance(model, RESTfulChatModelHandle) AssertionError ``` ### To Reproduce To help us to reproduce this bug, please provide information below: 1. Your Python version. 2. 
The version of xinference you use. 3. Versions of crucial packages. 4. Full stack of the error. 5. Minimized code to reproduce the error. ### Expected behavior A clear and concise description of what you expected to happen. ### Additional context Add any other context about the problem here.
[ { "body": "### Describe the bug\r\nWorker lost after downloading the model.\r\n```\r\n2023-11-06 12:00:52,145 - modelscope - INFO - Use user-specified model revision: v1.0.0\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████| 3.27G/3.27G [06:02<00:00, 9.68MB/s]\r\n2023-11-06 12:06:57,211 xinference.core.supervisor 70317 ERROR Worker timeout. address: 127.0.0.1:45560, influenced models: []\r\n```\r\n\r\n### To Reproduce\r\nTo help us to reproduce this bug, please provide information below:\r\n\r\n1. Your Python version.\r\n2. The version of xinference you use.\r\n3. Versions of crucial packages.\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n### Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n### Additional context\r\nAdd any other context about the problem here.\r\n", "number": 608, "title": "BUG: Downloading model led to worker timeout" }, { "body": "### Describe the bug\r\nlogs:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/routes.py\", line 523, in run_predict\r\n output = await app.get_blocks().process_api(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/blocks.py\", line 1437, in process_api\r\n result = await self.call_function(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/blocks.py\", line 1123, in call_function\r\n prediction = await utils.async_iteration(iterator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py\", line 508, in async_iteration\r\n return await iterator.__anext__()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py\", line 827, in asyncgen_wrapper\r\n async for response in f(*args, **kwargs):\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/chat_interface.py\", line 438, in _stream_fn\r\n first_response = await async_iteration(generator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py\", line 508, in async_iteration\r\n return await iterator.__anext__()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py\", line 501, in __anext__\r\n return await anyio.to_thread.run_sync(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n ^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hekaisheng/miniconda3/envs/py311/lib/python3.11/site-packages/gradio/utils.py\", line 484, in run_sync_iterator_async\r\n return next(iterator)\r\n ^^^^^^^^^^^^^^\r\n File 
\"/Users/hekaisheng/Documents/projects/inference/xinference/core/chat_interface.py\", line 105, in generate_wrapper\r\n assert isinstance(model, RESTfulChatModelHandle)\r\nAssertionError\r\n```\r\n\r\n### To Reproduce\r\nTo help us to reproduce this bug, please provide information below:\r\n\r\n1. Your Python version.\r\n2. The version of xinference you use.\r\n3. Versions of crucial packages.\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n### Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n### Additional context\r\nAdd any other context about the problem here.\r\n", "number": 609, "title": "BUG: Failed to use chatglm model with ggml format" } ]
4903bae2f4fd60a93af5d79801766d65ee741511
{ "head_commit": "dbb48997e3d0a481fe13800b84a9b0395c11b95e", "head_commit_message": "chatglm3 ggml", "patch_to_review": "diff --git a/xinference/core/chat_interface.py b/xinference/core/chat_interface.py\nindex 86f4f081bc..aa5b284a72 100644\n--- a/xinference/core/chat_interface.py\n+++ b/xinference/core/chat_interface.py\n@@ -21,6 +21,7 @@\n from gradio.layouts import Accordion, Column, Row\n \n from ..client.restful.restful_client import (\n+ RESTfulChatglmCppChatModelHandle,\n RESTfulChatModelHandle,\n RESTfulGenerateModelHandle,\n )\n@@ -102,7 +103,9 @@ def generate_wrapper(\n \n client = RESTfulClient(self.endpoint)\n model = client.get_model(self.model_uid)\n- assert isinstance(model, RESTfulChatModelHandle)\n+ assert isinstance(\n+ model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle)\n+ )\n \n response_content = \"\"\n for chunk in model.chat(\ndiff --git a/xinference/core/model.py b/xinference/core/model.py\nindex e71fecdd6d..12106c85c1 100644\n--- a/xinference/core/model.py\n+++ b/xinference/core/model.py\n@@ -183,10 +183,10 @@ async def _async_wrapper():\n await getattr(self._model, \"async_chat\")(prompt, *args, **kwargs)\n )\n \n- if hasattr(self._model, \"generate\"):\n- return await self._call_wrapper(_wrapper)\n- else:\n+ if hasattr(self._model, \"async_chat\"):\n return await self._call_async_wrapper(_async_wrapper)\n+ else:\n+ return await self._call_wrapper(_wrapper)\n \n async def create_embedding(self, input: Union[str, List[str]], *args, **kwargs):\n if not hasattr(self._model, \"create_embedding\"):\ndiff --git a/xinference/core/worker.py b/xinference/core/worker.py\nindex 9873e99536..a8dcd7794d 100644\n--- a/xinference/core/worker.py\n+++ b/xinference/core/worker.py\n@@ -231,7 +231,8 @@ async def launch_builtin_model(\n assert self._supervisor_ref is not None\n is_local_deployment = await self._supervisor_ref.is_local_deployment()\n \n- model, model_description = create_model_instance(\n+ model, model_description = await asyncio.to_thread(\n+ create_model_instance,\n model_uid,\n model_type,\n model_name,\ndiff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json\nindex 45627a020b..575897bfb1 100644\n--- a/xinference/model/llm/llm_family.json\n+++ b/xinference/model/llm/llm_family.json\n@@ -494,6 +494,15 @@\n ],\n \"model_description\": \"ChatGLM3 is the third generation of ChatGLM, still open-source and trained on Chinese and English data.\",\n \"model_specs\": [\n+ {\n+ \"model_format\": \"ggmlv3\",\n+ \"model_size_in_billions\": 6,\n+ \"quantizations\": [\n+ \"q4_0\"\n+ ],\n+ \"model_id\": \"Xorbits/chatglm2-6B-GGML\",\n+ \"model_file_name_template\": \"chatglm2-ggml-{quantization}.bin\"\n+ },\n {\n \"model_format\": \"pytorch\",\n \"model_size_in_billions\": 6,\n" }
[ { "diff_hunk": "@@ -494,6 +494,15 @@\n ],\n \"model_description\": \"ChatGLM3 is the third generation of ChatGLM, still open-source and trained on Chinese and English data.\",\n \"model_specs\": [\n+ {\n+ \"model_format\": \"ggmlv3\",\n+ \"model_size_in_billions\": 6,\n+ \"quantizations\": [\n+ \"q4_0\"\n+ ],\n+ \"model_id\": \"Xorbits/chatglm2-6B-GGML\",", "line": null, "original_line": 503, "original_start_line": null, "path": "xinference/model/llm/llm_family.json", "start_line": null, "text": "@user1:\nshould be `chatglm3`?\r\n\r\nThe filename template also needs to be fixed.\n\n@author:\nFixed." } ]
f6acabc185b20a6690fcd0d3ff42ea6ae44e7056
diff --git a/xinference/core/chat_interface.py b/xinference/core/chat_interface.py index 86f4f081bc..aa5b284a72 100644 --- a/xinference/core/chat_interface.py +++ b/xinference/core/chat_interface.py @@ -21,6 +21,7 @@ from gradio.layouts import Accordion, Column, Row from ..client.restful.restful_client import ( + RESTfulChatglmCppChatModelHandle, RESTfulChatModelHandle, RESTfulGenerateModelHandle, ) @@ -102,7 +103,9 @@ def generate_wrapper( client = RESTfulClient(self.endpoint) model = client.get_model(self.model_uid) - assert isinstance(model, RESTfulChatModelHandle) + assert isinstance( + model, (RESTfulChatModelHandle, RESTfulChatglmCppChatModelHandle) + ) response_content = "" for chunk in model.chat( diff --git a/xinference/core/model.py b/xinference/core/model.py index e71fecdd6d..12106c85c1 100644 --- a/xinference/core/model.py +++ b/xinference/core/model.py @@ -183,10 +183,10 @@ async def _async_wrapper(): await getattr(self._model, "async_chat")(prompt, *args, **kwargs) ) - if hasattr(self._model, "generate"): - return await self._call_wrapper(_wrapper) - else: + if hasattr(self._model, "async_chat"): return await self._call_async_wrapper(_async_wrapper) + else: + return await self._call_wrapper(_wrapper) async def create_embedding(self, input: Union[str, List[str]], *args, **kwargs): if not hasattr(self._model, "create_embedding"): diff --git a/xinference/core/worker.py b/xinference/core/worker.py index 9873e99536..a8dcd7794d 100644 --- a/xinference/core/worker.py +++ b/xinference/core/worker.py @@ -231,7 +231,8 @@ async def launch_builtin_model( assert self._supervisor_ref is not None is_local_deployment = await self._supervisor_ref.is_local_deployment() - model, model_description = create_model_instance( + model, model_description = await asyncio.to_thread( + create_model_instance, model_uid, model_type, model_name, diff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json index 45627a020b..b87c2ba4d4 100644 --- a/xinference/model/llm/llm_family.json +++ b/xinference/model/llm/llm_family.json @@ -494,6 +494,15 @@ ], "model_description": "ChatGLM3 is the third generation of ChatGLM, still open-source and trained on Chinese and English data.", "model_specs": [ + { + "model_format": "ggmlv3", + "model_size_in_billions": 6, + "quantizations": [ + "q4_0" + ], + "model_id": "Xorbits/chatglm3-6B-GGML", + "model_file_name_template": "chatglm3-ggml-{quantization}.bin" + }, { "model_format": "pytorch", "model_size_in_billions": 6, diff --git a/xinference/model/llm/llm_family_modelscope.json b/xinference/model/llm/llm_family_modelscope.json index 4a0a1e3a51..d8dd81fb31 100644 --- a/xinference/model/llm/llm_family_modelscope.json +++ b/xinference/model/llm/llm_family_modelscope.json @@ -301,6 +301,17 @@ ], "model_description": "ChatGLM3 is the third generation of ChatGLM, still open-source and trained on Chinese and English data.", "model_specs": [ + { + "model_format": "ggmlv3", + "model_size_in_billions": 6, + "quantizations": [ + "q4_0" + ], + "model_hub": "modelscope", + "model_id": "Xorbits/chatglm3-ggml", + "model_revision": "v1.0.0", + "model_file_name_template": "chatglm3-ggml-{quantization}.bin" + }, { "model_format": "pytorch", "model_size_in_billions": 6,
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-27510@5ae991f
sympy/sympy
Python
27510

Improve Custom Function Precedence Printing
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed Improves the precedence handling of SymPy with custom functions in multiplication to ensure correct parenthesization and preservation of precedence. #### Other comments Problem - The current implementation fails to correctly parenthesize expressions involving negative numbers during multiplication. Solution - Introduces a custom printer that intelligently handles parenthesization for custom function expressions. Test Cases - Added comprehensive test for custom function precedence Fixes #25026 #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2025-01-25T09:17:03Z
Printing multiplication by negative number with custom (infix) function not correctly parenthesized. I am trying to define a custom function with lower precedence than multiplication. However, whenever there is a negative number involved, the parentheses are not there anymore. ----- **Custom function:** ```python class F(sympy.Function): precedence = 45 def _sympystr(self, printer): return f"{printer._print(self.args[0])} F {printer._print(self.args[1])}" ``` **Printing:** ```python >>> a, b = sympy.symbols("a b") >>> 2 * F(a, b) 2*(a F b) >>> -2 * F(a, b) -2*a F b # <-------------- No parentheses! # Expected: -2*(a F b) ``` ----- Although I am able to get around that, this is a bug, right?
Maybe a call to `printer.parenthesize(arg, level)` is missing, but I can't quite get it to work. Ok, so I debugged this to: ```pycon >>> precedence(Mul(Integer(-1), Integer(2), F(Symbol('a'), Symbol('b')))) 40 >>> precedence(Mul(Integer(2), F(Symbol('a'), Symbol('b')))) 50 ``` so if you set your precedence lower than Add, e.g. 35, it does work: ```pycon >>> str(-2*F(a,b)) -2*(a F b) ``` not sure if this is to be considered a workaround or "expected" behavior. (I do find the precedence machinery somewhat convoluted...) Hmm. I guess that could work. The problem I see, however, is the way `_print_Mul` gets its precedence. https://github.com/sympy/sympy/blob/9817721589a769319c22437c116d2deffce26aef/sympy/printing/str.py#L263 This function returns the `Add` precedence due to the leading negative number. Thank you, anyway. Yup, and here's where that's happening: https://github.com/sympy/sympy/blob/9817721589a769319c22437c116d2deffce26aef/sympy/printing/precedence.py#L61-L64 It's been like that [forever](https://github.com/sympy/sympy/blame/598726524de3271c8718e5848a64e1e56dd8aa4a/sympy/core/mul.py#L303), it's not immediately apparent to me why but probably unary minus operator interacting with binary operators or something along those lines. @sympy-bot Assigned me this issue @bjodah @oscarbenjamin I have resolved this issue please look into this #27510 This issue has been resolved by me initially please review it. It has been fixed @bjodah @oscarbenjamin
[ { "body": "I am trying to define a custom function with lower precedence than multiplication. However, whenever there is a negative number involved, the parentheses are not there anymore.\r\n\r\n-----\r\n\r\n**Custom function:**\r\n```python\r\nclass F(sympy.Function):\r\n precedence = 45\r\n\r\n def _sympystr(self, printer):\r\n return f\"{printer._print(self.args[0])} F {printer._print(self.args[1])}\"\r\n```\r\n\r\n**Printing:**\r\n```python\r\n>>> a, b = sympy.symbols(\"a b\")\r\n\r\n>>> 2 * F(a, b)\r\n2*(a F b)\r\n\r\n>>> -2 * F(a, b)\r\n-2*a F b # <-------------- No parentheses!\r\n # Expected: -2*(a F b)\r\n```\r\n\r\n-----\r\n\r\nAlthough I am able to get around that, this is a bug, right?", "number": 25026, "title": "Printing multiplication by negative number with custom (infix) function not correctly parenthesized." } ]
f07466ae38d6f7985c4e9eec2c7dfff43fec3cf7
{ "head_commit": "5ae991f362a2033f00b216a870d9a49d18cb848d", "head_commit_message": "Merge branch 'sympy:master' into preced", "patch_to_review": "diff --git a/sympy/printing/precedence.py b/sympy/printing/precedence.py\nindex 563a04b3494a..6c45215ae72b 100644\n--- a/sympy/printing/precedence.py\n+++ b/sympy/printing/precedence.py\n@@ -59,6 +59,16 @@\n \n \n def precedence_Mul(item):\n+ from sympy.core.function import Function\n+ custom_func_args = [\n+ arg for arg in item.args\n+ if hasattr(arg, 'precedence') and isinstance(arg, Function)\n+ ]\n+ if custom_func_args:\n+ min_custom_precedence = min(arg.precedence for arg in custom_func_args)\n+ if min_custom_precedence < PRECEDENCE[\"Mul\"]:\n+ return PRECEDENCE[\"Mul\"]\n+\n if item.could_extract_minus_sign():\n return PRECEDENCE[\"Add\"]\n return PRECEDENCE[\"Mul\"]\ndiff --git a/sympy/printing/tests/test_precedence.py b/sympy/printing/tests/test_precedence.py\nindex 372a5b0356b7..31d8832acd59 100644\n--- a/sympy/printing/tests/test_precedence.py\n+++ b/sympy/printing/tests/test_precedence.py\n@@ -1,3 +1,4 @@\n+import sympy\n from sympy.concrete.products import Product\n from sympy.concrete.summations import Sum\n from sympy.core.function import Derivative\n@@ -87,3 +88,45 @@ def test_And_Or():\n assert precedence(x & y) == PRECEDENCE[\"And\"]\n assert precedence(x | y) == PRECEDENCE[\"Or\"]\n assert precedence(~y) == PRECEDENCE[\"Not\"]\n+\n+\n+def test_custom_function_precedence_comparison():\n+ \"\"\"\n+ Test cases for custom functions with different precedence values,\n+ specifically handling:\n+ 1. Functions with precedence < PRECEDENCE[\"Mul\"] (50)\n+ 2. Functions with precedence = Func (70)\n+\n+ Key distinction:\n+ - Lower precedence functions (45) need parentheses: -2*(x F y)\n+ - Higher precedence functions (70) don't: -2*x F y\n+ \"\"\"\n+ class LowPrecedenceF(sympy.Function):\n+ precedence = PRECEDENCE[\"Mul\"] - 5\n+ def _sympystr(self, printer):\n+ return f\"{printer._print(self.args[0])} F {printer._print(self.args[1])}\"\n+\n+ class HighPrecedenceF(sympy.Function):\n+ precedence = PRECEDENCE[\"Func\"]\n+ def _sympystr(self, printer):\n+ return f\"{printer._print(self.args[0])} F {printer._print(self.args[1])}\"\n+\n+ def test_low_precedence():\n+ expr1 = 2 * LowPrecedenceF(x, y)\n+ assert str(expr1) == \"2*(x F y)\"\n+\n+ expr2 = -2 * LowPrecedenceF(x, y)\n+ assert str(expr2) == \"-2*(x F y)\"\n+\n+ def test_high_precedence():\n+ expr1 = 2 * HighPrecedenceF(x, y)\n+ assert str(expr1) == \"2*x F y\"\n+\n+ expr2 = -2 * HighPrecedenceF(x, y)\n+ assert str(expr2) == \"-2*x F y\"\n+\n+ test_low_precedence()\n+ test_high_precedence()\n+\n+if __name__ == \"__main__\":\n+ test_custom_function_precedence_comparison()\n" }
[ { "diff_hunk": "@@ -87,3 +88,45 @@ def test_And_Or():\n assert precedence(x & y) == PRECEDENCE[\"And\"]\n assert precedence(x | y) == PRECEDENCE[\"Or\"]\n assert precedence(~y) == PRECEDENCE[\"Not\"]\n+\n+\n+def test_custom_function_precedence_comparison():\n+ \"\"\"\n+ Test cases for custom functions with different precedence values,\n+ specifically handling:\n+ 1. Functions with precedence < PRECEDENCE[\"Mul\"] (50)\n+ 2. Functions with precedence = Func (70)\n+\n+ Key distinction:\n+ - Lower precedence functions (45) need parentheses: -2*(x F y)\n+ - Higher precedence functions (70) don't: -2*x F y\n+ \"\"\"\n+ class LowPrecedenceF(sympy.Function):\n+ precedence = PRECEDENCE[\"Mul\"] - 5\n+ def _sympystr(self, printer):\n+ return f\"{printer._print(self.args[0])} F {printer._print(self.args[1])}\"\n+\n+ class HighPrecedenceF(sympy.Function):\n+ precedence = PRECEDENCE[\"Func\"]\n+ def _sympystr(self, printer):\n+ return f\"{printer._print(self.args[0])} F {printer._print(self.args[1])}\"\n+\n+ def test_low_precedence():\n+ expr1 = 2 * LowPrecedenceF(x, y)\n+ assert str(expr1) == \"2*(x F y)\"\n+\n+ expr2 = -2 * LowPrecedenceF(x, y)\n+ assert str(expr2) == \"-2*(x F y)\"\n+\n+ def test_high_precedence():\n+ expr1 = 2 * HighPrecedenceF(x, y)\n+ assert str(expr1) == \"2*x F y\"\n+\n+ expr2 = -2 * HighPrecedenceF(x, y)\n+ assert str(expr2) == \"-2*x F y\"\n+\n+ test_low_precedence()\n+ test_high_precedence()\n+\n+if __name__ == \"__main__\":", "line": null, "original_line": 131, "original_start_line": null, "path": "sympy/printing/tests/test_precedence.py", "start_line": null, "text": "@user1:\nshould remove the main\n\n@author:\nRemoved the main" }, { "diff_hunk": "@@ -59,6 +59,16 @@\n \n \n def precedence_Mul(item):\n+ from sympy.core.function import Function\n+ custom_func_args = [", "line": null, "original_line": 63, "original_start_line": null, "path": "sympy/printing/precedence.py", "start_line": null, "text": "@user1:\nperhaps the new logic could short circuit like\r\n\r\n```python\r\nif any(arg.precedence < PRECEDENCE[\"MUL\"] for ... if hasattr ...\r\n```\n\n@author:\nUpdated the condition of custom precedence" }, { "diff_hunk": "@@ -1,3 +1,4 @@\n+import sympy", "line": null, "original_line": 1, "original_start_line": null, "path": "sympy/printing/tests/test_precedence.py", "start_line": null, "text": "@user1:\nshould be able to remove\n\n@author:\nRemoved the import" } ]
9bf37c0982cf49826e39c17388a85ea6de0b4a02
diff --git a/sympy/printing/precedence.py b/sympy/printing/precedence.py index 563a04b3494a..d22d5746aeee 100644 --- a/sympy/printing/precedence.py +++ b/sympy/printing/precedence.py @@ -59,6 +59,11 @@ def precedence_Mul(item): + from sympy.core.function import Function + if any(hasattr(arg, 'precedence') and isinstance(arg, Function) and + arg.precedence < PRECEDENCE["Mul"] for arg in item.args): + return PRECEDENCE["Mul"] + if item.could_extract_minus_sign(): return PRECEDENCE["Add"] return PRECEDENCE["Mul"] diff --git a/sympy/printing/tests/test_precedence.py b/sympy/printing/tests/test_precedence.py index 372a5b0356b7..d08ea0748385 100644 --- a/sympy/printing/tests/test_precedence.py +++ b/sympy/printing/tests/test_precedence.py @@ -1,6 +1,6 @@ from sympy.concrete.products import Product from sympy.concrete.summations import Sum -from sympy.core.function import Derivative +from sympy.core.function import Derivative, Function from sympy.core.numbers import Integer, Rational, Float, oo from sympy.core.relational import Rel from sympy.core.symbol import symbols @@ -87,3 +87,42 @@ def test_And_Or(): assert precedence(x & y) == PRECEDENCE["And"] assert precedence(x | y) == PRECEDENCE["Or"] assert precedence(~y) == PRECEDENCE["Not"] + + +def test_custom_function_precedence_comparison(): + """ + Test cases for custom functions with different precedence values, + specifically handling: + 1. Functions with precedence < PRECEDENCE["Mul"] (50) + 2. Functions with precedence = Func (70) + + Key distinction: + 1. Lower precedence functions (45) need parentheses: -2*(x F y) + 2. Higher precedence functions (70) don't: -2*x F y + """ + class LowPrecedenceF(Function): + precedence = PRECEDENCE["Mul"] - 5 + def _sympystr(self, printer): + return f"{printer._print(self.args[0])} F {printer._print(self.args[1])}" + + class HighPrecedenceF(Function): + precedence = PRECEDENCE["Func"] + def _sympystr(self, printer): + return f"{printer._print(self.args[0])} F {printer._print(self.args[1])}" + + def test_low_precedence(): + expr1 = 2 * LowPrecedenceF(x, y) + assert str(expr1) == "2*(x F y)" + + expr2 = -2 * LowPrecedenceF(x, y) + assert str(expr2) == "-2*(x F y)" + + def test_high_precedence(): + expr1 = 2 * HighPrecedenceF(x, y) + assert str(expr1) == "2*x F y" + + expr2 = -2 * HighPrecedenceF(x, y) + assert str(expr2) == "-2*x F y" + + test_low_precedence() + test_high_precedence()
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-27503@71b92cf
sympy/sympy
Python
27503
Fix issues in polynomial multiplication with RR domain
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> This PR tackles numerical instability over the RR field when multiplying polynomials when coefficients differ by several orders of magnitude in magnitude. #### References to other Issues or PRs Fixes #27484 <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed When one multiplies polynomials whose coefficients are floating point in the RR domain, accumulation of numerical errors occurs so that the Karatsuba algorithm does not produce correct results quite often, notably if coefficients vary over several orders of magnitude. #### Other comments For RR domain it will use the standard method but for other domains it will use Karatsuba algorithm. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2025-01-22T19:46:39Z
Incorrect result when doing multiplication of polynomials in SymPy ``` import sympy x = sympy.symbols('x') e = sympy.poly(18786186952704.0*x**391 + 607420044804096.0*x**390 + 1.00192653483704e+16*x**389 + 1.12340813861618e+17*x**388 + 9.62398129745822e+17*x**387 + 6.71245471589807e+18*x**386 + 3.96638528291356e+19*x**385 + 2.04022369386301e+20*x**384 + 9.31645053399718e+20*x**383 + 3.83300315791215e+21*x**382 + 1.43731755825706e+22*x**381 + 4.9579170438376e+22*x**380 + 1.58510523198181e+23*x**379 + 4.72677246134559e+23*x**378 + 1.32172534702695e+24*x**377 + 3.48170338234977e+24*x**376 + 8.67508667977031e+24*x**375 + 2.05188548716969e+25*x**374 + 4.62217891489198e+25*x**373 + 9.94620611226133e+25*x**372 + 2.05022259222962e+26*x**371 + 4.05908315644194e+26*x**370 + 7.73820136814458e+26*x**369 + 1.42397337632569e+27*x**368 + 2.53540660922645e+27*x**367 + 4.37807004799877e+27*x**366 + 7.34819956155604e+27*x**365 + 1.20137214778114e+28*x**364 + 1.91716468432695e+28*x**363 + 2.9919487946825e+28*x**362 + 4.5742546781703e+28*x**361 + 6.86184291643935e+28*x**360 + 1.01139401836376e+29*x**359 + 1.46652441620162e+29*x**358 + 2.09416284080091e+29*x**357 + 2.94774606786176e+29*x**356 + 4.09344545520338e+29*x**355 + 5.61219599549279e+29*x**354 + 7.60185071357956e+29*x**353 + 1.01794218645138e+30*x**352 + 1.34833422466908e+30*x**351 + 1.7675653090294e+30*x**350 + 2.29440117371239e+30*x**349 + 2.9503420548127e+30*x**348 + 3.75976046156287e+30*x**347 + 4.74999937073871e+30*x**346 + 5.95142719009883e+30*x**345 + 7.39744333715976e+30*x**344 + 9.12442320361216e+30*x**343 + 1.11715865398631e+31*x**342 + 1.35807726405611e+31*x**341 + 1.6396111359939e+31*x**340 + 1.96635896889856e+31*x**339 + 2.34305246185014e+31*x**338 + 2.77449583729775e+31*x**337 + 3.26549884864953e+31*x**336 + 3.82080346521502e+31*x**335 + 4.44500339884872e+31*x**334 + 5.14245548133296e+31*x**333 + 5.9171830506378e+31*x**332 + 6.77277360121932e+31*x**331 + 7.7122749480933e+31*x**330 + 8.73809487670393e+31*x**329 + 9.85190815750957e+31*x**328 + 1.10545724219769e+32*x**327 + 1.23460521197106e+32*x**326 + 1.37253490695579e+32*x**325 + 1.51904395757882e+32*x**324 + 1.67382209506145e+32*x**323 + 1.83644728118423e+32*x**322 + 2.00638389805687e+32*x**321 + 2.18298335862547e+32*x**320 + 2.36548710870913e+32*x**319 + 2.55303164068604e+32*x**318 + 2.74465502057376e+32*x**317 + 2.93930460272433e+32*x**316 + 3.13584595246087e+32*x**315 + 3.33307329029845e+32*x**314 + 3.52972181242525e+32*x**313 + 3.7244819808164e+32*x**312 + 3.91601544485273e+32*x**311 + 4.10297189098036e+32*x**310 + 4.28400601723887e+32*x**309 + 4.45779404000103e+32*x**308 + 4.6230495269906e+32*x**307 + 4.77853868383823e+32*x**306 + 4.92309531273987e+32*x**305 + 5.05563547223298e+32*x**304 + 5.17517152342847e+32*x**303 + 5.28082496187536e+32*x**302 + 5.37183737651272e+32*x**301 + 5.44757908299232e+32*x**300 + 5.50755533567183e+32*x**299 + 5.55141034358678e+32*x**298 + 5.57892944810279e+32*x**297 + 5.5900397271145e+32*x**296 + 5.58480906204685e+32*x**295 + 5.56344349448449e+32*x**294 + 5.52628263849091e+32*x**293 + 5.47379303835558e+32*x**292 + 5.40655960133798e+32*x**291 + 5.32527546572947e+32*x**290 + 5.23073077683624e+32*x**289 + 5.12380080005117e+32*x**288 + 5.0054336473724e+32*x**287 + 4.87663772724559e+32*x**286 + 4.73846893635791e+32*x**285 + 4.59201763250845e+32*x**284 + 4.43839553177586e+32*x**283 + 4.27872279128691e+32*x**282 + 4.11411560166392e+32*x**281 + 3.94567458771032e+32*x**280 + 3.77447421763255e+32*x**279 + 3.60155329959775e+32*x**278 + 3.42790655320098e+32*x**277 + 
3.25447721236868e+32*x**276 + 3.08215064031296e+32*x**275 + 2.91174898459487e+32*x**274 + 2.74402693317065e+32*x**273 + 2.57966862663438e+32*x**272 + 2.41928573783986e+32*x**271 + 2.26341666779407e+32*x**270 + 2.11252675290002e+32*x**269 + 1.9670093522436e+32*x**268 + 1.82718768785443e+32*x**267 + 1.69331733442125e+32*x**266 + 1.56558928024408e+32*x**265 + 1.44413349419411e+32*x**264 + 1.32902292973695e+32*x**263 + 1.22027788212924e+32*x**262 + 1.11787059936967e+32*x**261 + 1.02173004087736e+32*x**260 + 9.31746684052255e+31*x**259 + 8.47777295239818e+31*x**258 + 7.69649601580008e+31*x**257 + 6.97166817091038e+31*x**256 + 6.30111986490116e+31*x**255 + 5.68252114106536e+31*x**254 + 5.11342046105023e+31*x**253 + 4.59128076016368e+31*x**252 + 4.11351248709378e+31*x**251 + 3.67750346516565e+31*x**250 + 3.280645514238e+31*x**249 + 2.92035786596398e+31*x**248 + 2.59410747247874e+31*x**247 + 2.29942634518776e+31*x**246 + 2.03392607416766e+31*x**245 + 1.79530968342324e+31*x**244 + 1.58138098451387e+31*x**243 + 1.39005160563051e+31*x**242 + 1.21934589258332e+31*x**241 + 1.06740389580791e+31*x**240 + 9.32482667165709e+30*x**239 + 8.1295608916701e+30*x**238 + 7.07313448617111e+30*x**237 + 6.1415695063951e+30*x**236 + 5.32198352029589e+30*x**235 + 4.602548778383e+30*x**234 + 3.97244572596053e+30*x**233 + 3.42181226502325e+30*x**232 + 2.94169005481941e+30*x**231 + 2.52396901031627e+30*x**230 + 2.16133101142353e+30*x**229 + 1.84719368145017e+30*x**228 + 1.5756549435175e+30*x**227 + 1.34143892913845e+30*x**226 + 1.13984369843053e+30*x**225 + 9.66691135076816e+29*x**224 + 8.18279296212723e+29*x**223 + 6.91337422766721e+29*x**222 + 5.82983746453424e+29*x**221 + 4.90686165535244e+29*x**220 + 4.12225804367941e+29*x**219 + 3.4566342372779e+29*x**218 + 2.89308611068738e+29*x**217 + 2.41691651708124e+29*x**216 + 2.0153796190433e+29*x**215 + 1.67744950765163e+29*x**214 + 1.39361168169125e+29*x**213 + 1.15567589314061e+29*x**212 + 9.56608827143122e+28*x**211 + 7.90385074249973e+28*x**210 + 6.51854869552611e+28*x**209 + 5.36627114870467e+28*x**208 + 4.40966261989531e+28*x**207 + 3.61701710225607e+28*x**206 + 2.96148454683881e+28*x**205 + 2.42037807632271e+28*x**204 + 1.97457101717268e+28*x**203 + 1.6079736918933e+28*x**202 + 1.30708075344097e+28*x**201 + 1.06058066598448e+28*x**200 + 8.59019735266542e+27*x**199 + 6.94513855589117e+27*x**198 + 5.60501861937276e+27*x**197 + 4.51535044872758e+27*x**196 + 3.63098001128394e+27*x**195 + 2.91456552715732e+27*x**194 + 2.33528975814278e+27*x**193 + 1.86777240689401e+27*x**192 + 1.49115379140818e+27*x**191 + 1.188324688484e+27*x**190 + 9.45280579391412e+26*x**189 + 7.50581492553531e+26*x**188 + 5.94901241401661e+26*x**187 + 4.70652152586838e+26*x**186 + 3.71673370989202e+26*x**185 + 2.92972569731752e+26*x**184 + 2.30512401872165e+26*x**183 + 1.81034340676203e+26*x**182 + 1.41913687033006e+26*x**181 + 1.11040492005887e+26*x**180 + 8.67219796721523e+25*x**179 + 6.76027631408076e+25*x**178 + 5.25997556818479e+25*x**177 + 4.08491829759755e+25*x**176 + 3.16635446647424e+25*x**175 + 2.44967304875125e+25*x**174 + 1.89158054889391e+25*x**173 + 1.45782414732817e+25*x**172 + 1.12135791467499e+25*x**171 + 8.60869197122726e+24*x**170 + 6.59597006043051e+24*x**169 + 5.0438634649684e+24*x**168 + 3.84933227879042e+24*x**167 + 2.93183152367596e+24*x**166 + 2.22852706967105e+24*x**165 + 1.69050194946342e+24*x**164 + 1.27975122104309e+24*x**163 + 9.6680881289861e+23*x**162 + 7.28875829238494e+23*x**161 + 5.48348800710694e+23*x**160 + 4.11663955183463e+23*x**159 + 3.08392351664647e+23*x**158 + 
2.30531741539953e+23*x**157 + 1.7195533162247e+23*x**156 + 1.27982334666812e+23*x**155 + 9.50438312503859e+22*x**154 + 7.04253447662509e+22*x**153 + 5.20660666190957e+22*x**152 + 3.84053327419543e+22*x**151 + 2.82637478194694e+22*x**150 + 2.07519363919927e+22*x**149 + 1.52008380464323e+22*x**148 + 1.11082481827257e+22*x**147 + 8.09807989309473e+21*x**146 + 5.88931041753267e+21*x**145 + 4.27250564717805e+21*x**144 + 3.09188307656375e+21*x**143 + 2.23190051864519e+21*x**142 + 1.60703144783557e+21*x**141 + 1.15414058643475e+21*x**140 + 8.26734051209549e+20*x**139 + 5.906526391579e+20*x**138 + 4.20866452815223e+20*x**137 + 2.99080554595048e+20*x**136 + 2.11957904907271e+20*x**135 + 1.49800754528007e+20*x**134 + 1.055762803507e+20*x**133 + 7.4197863666214e+19*x**132 + 5.19964435311348e+19*x**131 + 3.6332674072111e+19*x**130 + 2.5313174369628e+19*x**129 + 1.75834913998358e+19*x**128 + 1.21774422135393e+19*x**127 + 8.40779420386497e+18*x**126 + 5.78715428446679e+18*x**125 + 3.970875524594e+18*x**124 + 2.71597982386884e+18*x**123 + 1.85168214872764e+18*x**122 + 1.25830615369268e+18*x**121 + 8.52248961334998e+17*x**120 + 5.75289976644556e+17*x**119 + 3.87013642816788e+17*x**118 + 2.5945547266754e+17*x**117 + 1.73329658702463e+17*x**116 + 1.15380239021783e+17*x**115 + 7.65266794627435e+16*x**114 + 5.05697395733019e+16*x**113 + 3.32919678677403e+16*x**112 + 2.18340096225946e+16*x**111 + 1.42641771653066e+16*x**110 + 9.28222858958434e+15*x**109 + 6.01617039744338e+15*x**108 + 3.88344180735753e+15*x**107 + 2.49635298837213e+15*x**106 + 1.59789587032747e+15*x**105 + 1.01837064242638e+15*x**104 + 646162735416304.0*x**103 + 408151660246024.0*x**102 + 256632997920088.0*x**101 + 160611666154064.0*x**100 + 100039649901592.0*x**99 + 62007636538464.0*x**98 + 38241146390736.0*x**97 + 23461909088736.0*x**96 + 14317876797656.0*x**95 + 8689992463568.0*x**94 + 5245001129704.0*x**93 + 3147872937568.0*x**92 + 1878373224984.0*x**91 + 1114227943432.0*x**90 + 656870360848.0*x**89 + 384733888736.0*x**88 + 223811972808.0*x**87 + 129272278504.0*x**86 + 74125358696.0*x**85 + 42193679544.0*x**84 + 23840311096.0*x**83 + 13370176224.0*x**82 + 7439064432.0*x**81 + 4103093072.0*x**80 + 2241521448.0*x**79 + 1211545984.0*x**78 + 647642728.0*x**77 + 342538808.0*x**76 + 179304800.0*x**75 + 92964480.0*x**74 + 47694992.0*x**73 + 24132760.0*x**72 + 12008720.0*x**71 + 5853392.0*x**70 + 2792792.0*x**69 + 1311320.0*x**68 + 607200.0*x**67 + 279200.0*x**66 + 128520.0*x**65 + 60232.0*x**64 + 31536.0*x**63 + 20472.0*x**62 + 17060.0*x**61 + 16848.0*x**60 + 17700.0*x**59 + 19528.0*x**58 + 21624.0*x**57 + 23688.0*x**56 + 24700.0*x**55 + 24392.0*x**54 + 23892.0*x**53 + 22464.0*x**52 + 21360.0*x**51 + 21192.0*x**50 + 19836.0*x**49 + 18988.0*x**48 + 18484.0*x**47 + 16136.0*x**46 + 14968.0*x**45 + 13928.0*x**44 + 11460.0*x**43 + 10520.0*x**42 + 8992.0*x**41 + 7028.0*x**40 + 6548.0*x**39 + 5104.0*x**38 + 3940.0*x**37 + 3466.0*x**36 + 2492.0*x**35 + 2174.0*x**34 + 1716.0*x**33 + 1188.0*x**32 + 1204.0*x**31 + 856.0*x**30 + 560.0*x**29 + 512.0*x**28 + 316.0*x**27 + 230.0*x**26 + 176.0*x**25 + 86.0*x**24 + 86.0*x**23 + 54.0*x**22 + 12.0*x**21 + 38.0*x**20 + 20.0*x**19 + 4.0*x**18 + 12.0*x**17 + 4.0*x**16 + 4.0*x**15 + 2.0*x**14 + 2.0*x**11 + 4.0*x**9 + 2.0*x**8) print(e*x) ``` I was doing a simple multiplication calculation, but I got an incorrect result: > Poly(-1.02749361615667e+15*x**392 + 1.00759245569393e+16*x**391 - 1.9300277358166e+16*x**390 + 1.32156899612164e+17*x**389 + 9.38136306167316e+17*x**388 + 6.73247311178813e+18*x**387 + 
3.96574788227904e+19*x**386 + 2.04057697794413e+20*x**385 + 9.31648654918774e+20*x**384 + 3.83296743594759e+21*x**383 ........... The version I am using is 1.13.3
I'm not sure what is happening here: ```python In [30]: e.LC() Out[30]: 18786186952704.0 In [31]: (e*x).LC() Out[31]: -1.02749361615667e+15 ``` I think it goes wrong in here: https://github.com/sympy/sympy/blob/97e74c1a97d0cf08ef63be24921f5c9e620d3e68/sympy/polys/densearith.py#L773-L789 This is a major bug to do with RR somehow: ```python In [4]: (e.set_domain(QQ)*x).LC() Out[4]: 18786186952704 In [5]: (e*x).LC() Out[5]: -1.02749361615667e+15 ``` Here is a simpler reproducer: ```python In [5]: p Out[5]: Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR') In [6]: p * x Out[6]: Poly(9.31746684052255e+31*x**83, x, domain='RR') ``` It seems to be important that there is a big difference in magnitude between the coefficients: ```python In [15]: p = Poly(0.1234*x**165 + 1e30*x**82, x, domain='RR') In [16]: p Out[16]: Poly(0.1234*x**165 + 1.0e+30*x**82, x, domain='RR') In [17]: p*x Out[17]: Poly(1.0e+30*x**83, x, domain='RR') ``` Something somewhere might be removing "small" coefficients. This seems to fix it: ```diff diff --git a/sympy/polys/densearith.py b/sympy/polys/densearith.py index 30bf9553b0..d798b1cd1b 100644 --- a/sympy/polys/densearith.py +++ b/sympy/polys/densearith.py @@ -757,7 +757,7 @@ def dup_mul(f, g, K): n = max(df, dg) + 1 - if n < 100: + if True: h = [] for i in range(0, df + dg + 1): ``` Maybe the Karatsuba algorithm should be disabled for inexact domains. @oscarbenjamin please look into the PR that has resolved this issue #27502 @oscarbenjamin please review the PR that resolves this issue #27503
[ { "body": "```\nimport sympy\n\nx = sympy.symbols('x')\ne = sympy.poly(18786186952704.0*x**391 + 607420044804096.0*x**390 + 1.00192653483704e+16*x**389 + 1.12340813861618e+17*x**388 + 9.62398129745822e+17*x**387 + 6.71245471589807e+18*x**386 + 3.96638528291356e+19*x**385 + 2.04022369386301e+20*x**384 + 9.31645053399718e+20*x**383 + 3.83300315791215e+21*x**382 + 1.43731755825706e+22*x**381 + 4.9579170438376e+22*x**380 + 1.58510523198181e+23*x**379 + 4.72677246134559e+23*x**378 + 1.32172534702695e+24*x**377 + 3.48170338234977e+24*x**376 + 8.67508667977031e+24*x**375 + 2.05188548716969e+25*x**374 + 4.62217891489198e+25*x**373 + 9.94620611226133e+25*x**372 + 2.05022259222962e+26*x**371 + 4.05908315644194e+26*x**370 + 7.73820136814458e+26*x**369 + 1.42397337632569e+27*x**368 + 2.53540660922645e+27*x**367 + 4.37807004799877e+27*x**366 + 7.34819956155604e+27*x**365 + 1.20137214778114e+28*x**364 + 1.91716468432695e+28*x**363 + 2.9919487946825e+28*x**362 + 4.5742546781703e+28*x**361 + 6.86184291643935e+28*x**360 + 1.01139401836376e+29*x**359 + 1.46652441620162e+29*x**358 + 2.09416284080091e+29*x**357 + 2.94774606786176e+29*x**356 + 4.09344545520338e+29*x**355 + 5.61219599549279e+29*x**354 + 7.60185071357956e+29*x**353 + 1.01794218645138e+30*x**352 + 1.34833422466908e+30*x**351 + 1.7675653090294e+30*x**350 + 2.29440117371239e+30*x**349 + 2.9503420548127e+30*x**348 + 3.75976046156287e+30*x**347 + 4.74999937073871e+30*x**346 + 5.95142719009883e+30*x**345 + 7.39744333715976e+30*x**344 + 9.12442320361216e+30*x**343 + 1.11715865398631e+31*x**342 + 1.35807726405611e+31*x**341 + 1.6396111359939e+31*x**340 + 1.96635896889856e+31*x**339 + 2.34305246185014e+31*x**338 + 2.77449583729775e+31*x**337 + 3.26549884864953e+31*x**336 + 3.82080346521502e+31*x**335 + 4.44500339884872e+31*x**334 + 5.14245548133296e+31*x**333 + 5.9171830506378e+31*x**332 + 6.77277360121932e+31*x**331 + 7.7122749480933e+31*x**330 + 8.73809487670393e+31*x**329 + 9.85190815750957e+31*x**328 + 1.10545724219769e+32*x**327 + 1.23460521197106e+32*x**326 + 1.37253490695579e+32*x**325 + 1.51904395757882e+32*x**324 + 1.67382209506145e+32*x**323 + 1.83644728118423e+32*x**322 + 2.00638389805687e+32*x**321 + 2.18298335862547e+32*x**320 + 2.36548710870913e+32*x**319 + 2.55303164068604e+32*x**318 + 2.74465502057376e+32*x**317 + 2.93930460272433e+32*x**316 + 3.13584595246087e+32*x**315 + 3.33307329029845e+32*x**314 + 3.52972181242525e+32*x**313 + 3.7244819808164e+32*x**312 + 3.91601544485273e+32*x**311 + 4.10297189098036e+32*x**310 + 4.28400601723887e+32*x**309 + 4.45779404000103e+32*x**308 + 4.6230495269906e+32*x**307 + 4.77853868383823e+32*x**306 + 4.92309531273987e+32*x**305 + 5.05563547223298e+32*x**304 + 5.17517152342847e+32*x**303 + 5.28082496187536e+32*x**302 + 5.37183737651272e+32*x**301 + 5.44757908299232e+32*x**300 + 5.50755533567183e+32*x**299 + 5.55141034358678e+32*x**298 + 5.57892944810279e+32*x**297 + 5.5900397271145e+32*x**296 + 5.58480906204685e+32*x**295 + 5.56344349448449e+32*x**294 + 5.52628263849091e+32*x**293 + 5.47379303835558e+32*x**292 + 5.40655960133798e+32*x**291 + 5.32527546572947e+32*x**290 + 5.23073077683624e+32*x**289 + 5.12380080005117e+32*x**288 + 5.0054336473724e+32*x**287 + 4.87663772724559e+32*x**286 + 4.73846893635791e+32*x**285 + 4.59201763250845e+32*x**284 + 4.43839553177586e+32*x**283 + 4.27872279128691e+32*x**282 + 4.11411560166392e+32*x**281 + 3.94567458771032e+32*x**280 + 3.77447421763255e+32*x**279 + 3.60155329959775e+32*x**278 + 3.42790655320098e+32*x**277 + 3.25447721236868e+32*x**276 + 
3.08215064031296e+32*x**275 + 2.91174898459487e+32*x**274 + 2.74402693317065e+32*x**273 + 2.57966862663438e+32*x**272 + 2.41928573783986e+32*x**271 + 2.26341666779407e+32*x**270 + 2.11252675290002e+32*x**269 + 1.9670093522436e+32*x**268 + 1.82718768785443e+32*x**267 + 1.69331733442125e+32*x**266 + 1.56558928024408e+32*x**265 + 1.44413349419411e+32*x**264 + 1.32902292973695e+32*x**263 + 1.22027788212924e+32*x**262 + 1.11787059936967e+32*x**261 + 1.02173004087736e+32*x**260 + 9.31746684052255e+31*x**259 + 8.47777295239818e+31*x**258 + 7.69649601580008e+31*x**257 + 6.97166817091038e+31*x**256 + 6.30111986490116e+31*x**255 + 5.68252114106536e+31*x**254 + 5.11342046105023e+31*x**253 + 4.59128076016368e+31*x**252 + 4.11351248709378e+31*x**251 + 3.67750346516565e+31*x**250 + 3.280645514238e+31*x**249 + 2.92035786596398e+31*x**248 + 2.59410747247874e+31*x**247 + 2.29942634518776e+31*x**246 + 2.03392607416766e+31*x**245 + 1.79530968342324e+31*x**244 + 1.58138098451387e+31*x**243 + 1.39005160563051e+31*x**242 + 1.21934589258332e+31*x**241 + 1.06740389580791e+31*x**240 + 9.32482667165709e+30*x**239 + 8.1295608916701e+30*x**238 + 7.07313448617111e+30*x**237 + 6.1415695063951e+30*x**236 + 5.32198352029589e+30*x**235 + 4.602548778383e+30*x**234 + 3.97244572596053e+30*x**233 + 3.42181226502325e+30*x**232 + 2.94169005481941e+30*x**231 + 2.52396901031627e+30*x**230 + 2.16133101142353e+30*x**229 + 1.84719368145017e+30*x**228 + 1.5756549435175e+30*x**227 + 1.34143892913845e+30*x**226 + 1.13984369843053e+30*x**225 + 9.66691135076816e+29*x**224 + 8.18279296212723e+29*x**223 + 6.91337422766721e+29*x**222 + 5.82983746453424e+29*x**221 + 4.90686165535244e+29*x**220 + 4.12225804367941e+29*x**219 + 3.4566342372779e+29*x**218 + 2.89308611068738e+29*x**217 + 2.41691651708124e+29*x**216 + 2.0153796190433e+29*x**215 + 1.67744950765163e+29*x**214 + 1.39361168169125e+29*x**213 + 1.15567589314061e+29*x**212 + 9.56608827143122e+28*x**211 + 7.90385074249973e+28*x**210 + 6.51854869552611e+28*x**209 + 5.36627114870467e+28*x**208 + 4.40966261989531e+28*x**207 + 3.61701710225607e+28*x**206 + 2.96148454683881e+28*x**205 + 2.42037807632271e+28*x**204 + 1.97457101717268e+28*x**203 + 1.6079736918933e+28*x**202 + 1.30708075344097e+28*x**201 + 1.06058066598448e+28*x**200 + 8.59019735266542e+27*x**199 + 6.94513855589117e+27*x**198 + 5.60501861937276e+27*x**197 + 4.51535044872758e+27*x**196 + 3.63098001128394e+27*x**195 + 2.91456552715732e+27*x**194 + 2.33528975814278e+27*x**193 + 1.86777240689401e+27*x**192 + 1.49115379140818e+27*x**191 + 1.188324688484e+27*x**190 + 9.45280579391412e+26*x**189 + 7.50581492553531e+26*x**188 + 5.94901241401661e+26*x**187 + 4.70652152586838e+26*x**186 + 3.71673370989202e+26*x**185 + 2.92972569731752e+26*x**184 + 2.30512401872165e+26*x**183 + 1.81034340676203e+26*x**182 + 1.41913687033006e+26*x**181 + 1.11040492005887e+26*x**180 + 8.67219796721523e+25*x**179 + 6.76027631408076e+25*x**178 + 5.25997556818479e+25*x**177 + 4.08491829759755e+25*x**176 + 3.16635446647424e+25*x**175 + 2.44967304875125e+25*x**174 + 1.89158054889391e+25*x**173 + 1.45782414732817e+25*x**172 + 1.12135791467499e+25*x**171 + 8.60869197122726e+24*x**170 + 6.59597006043051e+24*x**169 + 5.0438634649684e+24*x**168 + 3.84933227879042e+24*x**167 + 2.93183152367596e+24*x**166 + 2.22852706967105e+24*x**165 + 1.69050194946342e+24*x**164 + 1.27975122104309e+24*x**163 + 9.6680881289861e+23*x**162 + 7.28875829238494e+23*x**161 + 5.48348800710694e+23*x**160 + 4.11663955183463e+23*x**159 + 3.08392351664647e+23*x**158 + 2.30531741539953e+23*x**157 + 
1.7195533162247e+23*x**156 + 1.27982334666812e+23*x**155 + 9.50438312503859e+22*x**154 + 7.04253447662509e+22*x**153 + 5.20660666190957e+22*x**152 + 3.84053327419543e+22*x**151 + 2.82637478194694e+22*x**150 + 2.07519363919927e+22*x**149 + 1.52008380464323e+22*x**148 + 1.11082481827257e+22*x**147 + 8.09807989309473e+21*x**146 + 5.88931041753267e+21*x**145 + 4.27250564717805e+21*x**144 + 3.09188307656375e+21*x**143 + 2.23190051864519e+21*x**142 + 1.60703144783557e+21*x**141 + 1.15414058643475e+21*x**140 + 8.26734051209549e+20*x**139 + 5.906526391579e+20*x**138 + 4.20866452815223e+20*x**137 + 2.99080554595048e+20*x**136 + 2.11957904907271e+20*x**135 + 1.49800754528007e+20*x**134 + 1.055762803507e+20*x**133 + 7.4197863666214e+19*x**132 + 5.19964435311348e+19*x**131 + 3.6332674072111e+19*x**130 + 2.5313174369628e+19*x**129 + 1.75834913998358e+19*x**128 + 1.21774422135393e+19*x**127 + 8.40779420386497e+18*x**126 + 5.78715428446679e+18*x**125 + 3.970875524594e+18*x**124 + 2.71597982386884e+18*x**123 + 1.85168214872764e+18*x**122 + 1.25830615369268e+18*x**121 + 8.52248961334998e+17*x**120 + 5.75289976644556e+17*x**119 + 3.87013642816788e+17*x**118 + 2.5945547266754e+17*x**117 + 1.73329658702463e+17*x**116 + 1.15380239021783e+17*x**115 + 7.65266794627435e+16*x**114 + 5.05697395733019e+16*x**113 + 3.32919678677403e+16*x**112 + 2.18340096225946e+16*x**111 + 1.42641771653066e+16*x**110 + 9.28222858958434e+15*x**109 + 6.01617039744338e+15*x**108 + 3.88344180735753e+15*x**107 + 2.49635298837213e+15*x**106 + 1.59789587032747e+15*x**105 + 1.01837064242638e+15*x**104 + 646162735416304.0*x**103 + 408151660246024.0*x**102 + 256632997920088.0*x**101 + 160611666154064.0*x**100 + 100039649901592.0*x**99 + 62007636538464.0*x**98 + 38241146390736.0*x**97 + 23461909088736.0*x**96 + 14317876797656.0*x**95 + 8689992463568.0*x**94 + 5245001129704.0*x**93 + 3147872937568.0*x**92 + 1878373224984.0*x**91 + 1114227943432.0*x**90 + 656870360848.0*x**89 + 384733888736.0*x**88 + 223811972808.0*x**87 + 129272278504.0*x**86 + 74125358696.0*x**85 + 42193679544.0*x**84 + 23840311096.0*x**83 + 13370176224.0*x**82 + 7439064432.0*x**81 + 4103093072.0*x**80 + 2241521448.0*x**79 + 1211545984.0*x**78 + 647642728.0*x**77 + 342538808.0*x**76 + 179304800.0*x**75 + 92964480.0*x**74 + 47694992.0*x**73 + 24132760.0*x**72 + 12008720.0*x**71 + 5853392.0*x**70 + 2792792.0*x**69 + 1311320.0*x**68 + 607200.0*x**67 + 279200.0*x**66 + 128520.0*x**65 + 60232.0*x**64 + 31536.0*x**63 + 20472.0*x**62 + 17060.0*x**61 + 16848.0*x**60 + 17700.0*x**59 + 19528.0*x**58 + 21624.0*x**57 + 23688.0*x**56 + 24700.0*x**55 + 24392.0*x**54 + 23892.0*x**53 + 22464.0*x**52 + 21360.0*x**51 + 21192.0*x**50 + 19836.0*x**49 + 18988.0*x**48 + 18484.0*x**47 + 16136.0*x**46 + 14968.0*x**45 + 13928.0*x**44 + 11460.0*x**43 + 10520.0*x**42 + 8992.0*x**41 + 7028.0*x**40 + 6548.0*x**39 + 5104.0*x**38 + 3940.0*x**37 + 3466.0*x**36 + 2492.0*x**35 + 2174.0*x**34 + 1716.0*x**33 + 1188.0*x**32 + 1204.0*x**31 + 856.0*x**30 + 560.0*x**29 + 512.0*x**28 + 316.0*x**27 + 230.0*x**26 + 176.0*x**25 + 86.0*x**24 + 86.0*x**23 + 54.0*x**22 + 12.0*x**21 + 38.0*x**20 + 20.0*x**19 + 4.0*x**18 + 12.0*x**17 + 4.0*x**16 + 4.0*x**15 + 2.0*x**14 + 2.0*x**11 + 4.0*x**9 + 2.0*x**8)\nprint(e*x)\n```\n\n\nI was doing a simple multiplication calculation, but I got an incorrect result:\n\n> Poly(-1.02749361615667e+15*x**392 + 1.00759245569393e+16*x**391 - 1.9300277358166e+16*x**390 + 1.32156899612164e+17*x**389 + 9.38136306167316e+17*x**388 + 6.73247311178813e+18*x**387 + 3.96574788227904e+19*x**386 + 
2.04057697794413e+20*x**385 + 9.31648654918774e+20*x**384 + 3.83296743594759e+21*x**383 ...........\n\n The version I am using is 1.13.3", "number": 27484, "title": "Incorrect result when doing multiplication of polynomials in SymPy" } ]
fa7ee53392af364a53f062784cf79d0ecad70e86
{ "head_commit": "71b92cfd4b3d88d12f73b0e24ff6983387058c33", "head_commit_message": "added test cases", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex a14247066bf6..55ff28f9f63d 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1173,6 +1173,7 @@ Pranjal Tale <[email protected]>\n Prashant Tyagi <[email protected]>\n Prasoon Shukla <[email protected]>\n Prateek Papriwal <[email protected]>\n+Pratyksh Gupta <[email protected]> Pg999999 <[email protected]>\n Praveen Perumal <[email protected]> Praveen Perumal <[email protected]>\n Praveen Sahu <[email protected]> povinsahu1909 <[email protected]>\n Prayag <[email protected]> vprayag2005 <[email protected]>\ndiff --git a/sympy/polys/densearith.py b/sympy/polys/densearith.py\nindex 30bf9553b05e..1088691ca3fb 100644\n--- a/sympy/polys/densearith.py\n+++ b/sympy/polys/densearith.py\n@@ -757,7 +757,7 @@ def dup_mul(f, g, K):\n \n n = max(df, dg) + 1\n \n- if n < 100:\n+ if n < 100 or not K.is_Exact:\n h = []\n \n for i in range(0, df + dg + 1):\ndiff --git a/sympy/polys/tests/test_densearith.py b/sympy/polys/tests/test_densearith.py\nindex ea626f1feac2..0119f564a875 100644\n--- a/sympy/polys/tests/test_densearith.py\n+++ b/sympy/polys/tests/test_densearith.py\n@@ -41,7 +41,7 @@\n ExactQuotientFailed,\n )\n \n-from sympy.polys.specialpolys import f_polys\n+from sympy.polys.specialpolys import f_polys, Symbol, Poly\n from sympy.polys.domains import FF, ZZ, QQ\n \n from sympy.testing.pytest import raises\n@@ -995,3 +995,37 @@ def test_dmp_expand():\n assert dmp_expand(([[1], [2], [3]], [[1], [2]], [[7], [5], [4], [3]]), 1, ZZ) == \\\n dmp_mul([[1], [2], [3]], dmp_mul([[1], [2]], [[7], [5], [\n 4], [3]], 1, ZZ), 1, ZZ)\n+\n+def test_dup_mul_poly():\n+ x = Symbol('x')\n+\n+ p = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+\n+ assert p.LC() == 18786186952704.0\n+ assert (p * x).LC() == 18786186952704.0\n+\n+ p = Poly(0.1234*x**165 + 1e30*x**82, x, domain='RR')\n+\n+ assert p.LC() == 0.1234\n+ assert (p * x).LC() == 0.1234\n+\n+ p_rr = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+ p_qq = p_rr.set_domain(QQ)\n+\n+ assert abs(p_rr.LC() - p_qq.LC()) < 1e-10\n+\n+ p = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+\n+ p1 = p * x\n+ assert abs(p1.coeffs()[0] - p.coeffs()[0]) < 1e-10\n+ p2 = p + p\n+ assert abs(p2.coeffs()[0] - 2*p.coeffs()[0]) < 1e-10\n+ p3 = p - p\n+ assert all(abs(c) < 1e-10 for c in p3.coeffs())\n+\n+ p1 = Poly(1e-100*x**10 + 1e100*x**5, x, domain='RR')\n+ p2 = Poly(1e5*x**10 + 1e6*x**5, x, domain='RR')\n+\n+ assert (p1 * x).degree() == 11\n+ assert abs((p1 * x).LC() - 1e-100) < 1e-110\n+ assert abs((p2 * x).LC() - 1e5) < 1e-5\n" }
[ { "diff_hunk": "@@ -995,3 +995,37 @@ def test_dmp_expand():\n assert dmp_expand(([[1], [2], [3]], [[1], [2]], [[7], [5], [4], [3]]), 1, ZZ) == \\\n dmp_mul([[1], [2], [3]], dmp_mul([[1], [2]], [[7], [5], [\n 4], [3]], 1, ZZ), 1, ZZ)\n+\n+def test_dup_mul_poly():\n+ x = Symbol('x')\n+\n+ p = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+\n+ assert p.LC() == 18786186952704.0\n+ assert (p * x).LC() == 18786186952704.0\n+\n+ p = Poly(0.1234*x**165 + 1e30*x**82, x, domain='RR')\n+\n+ assert p.LC() == 0.1234\n+ assert (p * x).LC() == 0.1234\n+\n+ p_rr = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+ p_qq = p_rr.set_domain(QQ)\n+\n+ assert abs(p_rr.LC() - p_qq.LC()) < 1e-10\n+\n+ p = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\n+\n+ p1 = p * x\n+ assert abs(p1.coeffs()[0] - p.coeffs()[0]) < 1e-10\n+ p2 = p + p\n+ assert abs(p2.coeffs()[0] - 2*p.coeffs()[0]) < 1e-10\n+ p3 = p - p\n+ assert all(abs(c) < 1e-10 for c in p3.coeffs())\n+\n+ p1 = Poly(1e-100*x**10 + 1e100*x**5, x, domain='RR')\n+ p2 = Poly(1e5*x**10 + 1e6*x**5, x, domain='RR')\n+\n+ assert (p1 * x).degree() == 11\n+ assert abs((p1 * x).LC() - 1e-100) < 1e-110\n+ assert abs((p2 * x).LC() - 1e5) < 1e-5", "line": null, "original_line": 1031, "original_start_line": 999, "path": "sympy/polys/tests/test_densearith.py", "start_line": null, "text": "@user1:\nWhy does this define `p` repeatedly?\r\n\r\nClear test cases were spelled out in the issue. What is the rest of the code for?\r\n\r\nIf this test code is a dump from an LLM then please don't waste our time like this. This code is clearly not written by someone who understands each line and is thinking carefully.\n\n@author:\np is repeatedly defined in each cases. so, it should have different names for each cases or is there any other approach for these test cases @user1 \n\n@author:\nfrom sympy import Poly, Symbol,\r\n\r\ndef test_mul_poly():\r\n x = Symbol('x')\r\n \r\n large_coeff_poly = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\r\n assert large_coeff_poly.LC() == 18786186952704.0\r\n mult_large = large_coeff_poly * x\r\n assert mult_large.LC() == 18786186952704.0\r\n assert mult_large.degree() == large_coeff_poly.degree() + 1\r\n \r\n is this test case enough or should i add all the remaining test case under this function with other names defined @user1 \n\n@author:\n mixed_coeff_poly = Poly(0.1234*x**165 + 1e30*x**82, x, domain='RR')\r\n assert mixed_coeff_poly.LC() == 0.1234\r\n mult_mixed = mixed_coeff_poly * x\r\n assert mult_mixed.LC() == 0.1234\r\n assert mult_mixed.degree() == mixed_coeff_poly.degree() + 1\r\n\r\n extreme_poly = Poly(1e-100*x**10 + 1e100*x**5, x, domain='RR')\r\n mult_extreme = extreme_poly * x\r\n assert mult_extreme.degree() == extreme_poly.degree() + 1\r\n assert abs(mult_extreme.LC() - 1e-100) < 1e-110\r\n\r\n rr_poly = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\r\n qq_poly = rr_poly.set_domain(QQ)\r\n assert abs(rr_poly.LC() - qq_poly.LC()) < 1e-10\r\n\r\nThese are the remaining test cases with other names defined should i add this to the same function for the additional test as given in the issue @user1 \n\n@user3:\nI think @user2 you should add only this test case, mentioned in this [comment](https://github.com/sympy/sympy/issues/27484#issuecomment-2597189208) also try to add more cases only regarding different domain types.\r\nmaybe for the complex domain, it also going to find the wrong result. 
\r\n\r\nyou are currently addressing unnecessary test cases which are not related to the issue. \n\n@author:\nFor different domains i am going to use this test case in the similar example \r\n\r\n test_polynomials = [\r\n Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR'),\r\n Poly(18786186952704/1*x**165 + 9.31746684052255e+31/1*x**82, x, domain='QQ'),\r\n Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='CC')\r\n ]\r\n for poly in test_polynomials:\r\n original_coeffs = set(poly.all_coeffs())\r\n mult_poly = poly * x\r\n mult_coeffs = set(mult_poly.all_coeffs())\r\n assert original_coeffs.issubset(mult_coeffs)\r\n assert mult_poly.degree() == poly.degree() + 1\r\n \r\n With the one you commented in previously \r\n \r\n large_coeff_poly = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\r\n assert large_coeff_poly.LC() == 18786186952704.0\r\n large_coeff_mult = large_coeff_poly * x\r\n coeffs = large_coeff_mult.all_coeffs()\r\n assert 18786186952704.0 in coeffs and 9.31746684052255e+31 in coeffs\r\n assert large_coeff_mult.degree() == large_coeff_poly.degree() + 1\r\n \r\n Please review @user1 @user4-093 \n\n@user1:\nIt can just be:\r\n```\r\np = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR')\r\npx = Poly(18786186952704.0*x**166 + 9.31746684052255e+31*x**83, x, domain='RR')\r\n\r\nassert p * x == px\r\nassert p.set_domain(QQ) * x == px.set_domain(QQ)\r\nassert p.set_domain(CC) * x == px.set_domain(CC)\r\n```" } ]
7cab03dd7e25daf7e2440caa98ee6944ad77dd78
diff --git a/sympy/polys/densearith.py b/sympy/polys/densearith.py index 30bf9553b05e..1088691ca3fb 100644 --- a/sympy/polys/densearith.py +++ b/sympy/polys/densearith.py @@ -757,7 +757,7 @@ def dup_mul(f, g, K): n = max(df, dg) + 1 - if n < 100: + if n < 100 or not K.is_Exact: h = [] for i in range(0, df + dg + 1): diff --git a/sympy/polys/tests/test_densearith.py b/sympy/polys/tests/test_densearith.py index ea626f1feac2..ebb29d50867a 100644 --- a/sympy/polys/tests/test_densearith.py +++ b/sympy/polys/tests/test_densearith.py @@ -41,11 +41,13 @@ ExactQuotientFailed, ) -from sympy.polys.specialpolys import f_polys -from sympy.polys.domains import FF, ZZ, QQ +from sympy.polys.specialpolys import f_polys, Symbol, Poly +from sympy.polys.domains import FF, ZZ, QQ, CC from sympy.testing.pytest import raises +x = Symbol('x') + f_0, f_1, f_2, f_3, f_4, f_5, f_6 = [ f.to_dense() for f in f_polys() ] F_0 = dmp_mul_ground(dmp_normal(f_0, 2, QQ), QQ(1, 7), 2, QQ) @@ -995,3 +997,11 @@ def test_dmp_expand(): assert dmp_expand(([[1], [2], [3]], [[1], [2]], [[7], [5], [4], [3]]), 1, ZZ) == \ dmp_mul([[1], [2], [3]], dmp_mul([[1], [2]], [[7], [5], [ 4], [3]], 1, ZZ), 1, ZZ) + +def test_dup_mul_poly(): + p = Poly(18786186952704.0*x**165 + 9.31746684052255e+31*x**82, x, domain='RR') + px = Poly(18786186952704.0*x**166 + 9.31746684052255e+31*x**83, x, domain='RR') + + assert p * x == px + assert p.set_domain(QQ) * x == px.set_domain(QQ) + assert p.set_domain(CC) * x == px.set_domain(CC)
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
xorbitsai__inference-524@f4b46cc
xorbitsai/inference
Python
524
BUG: Fix stream not compatible with openai
OpenAI stream response always split by line and each line is a valid json with `data:` prefix. Fixes: https://github.com/xorbitsai/inference/issues/523
2023-10-10T10:54:30Z
BUG: Stream response does not compatible with openai
### Describe the bug
A clear and concise description of what the bug is.

```python
openai.error.APIError: HTTP code 200 from API ({"id": "chatcmpl-346fc892-932c-464a-8051-9df38827d566", "model": "my_code_llama", "created": 1696932753, "object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"role": ....
```

```python
Traceback (most recent call last):
  File "openai/api_requestor.py", line 673, in _interpret_response_line
  File "json/__init__.py", line 346, in loads
  File "json/decoder.py", line 337, in decode
  File "json/decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "server/continuedev/core/autopilot.py", line 437, in _run_singular_step
  File "server/continuedev/core/main.py", line 376, in __call__
  File "server/continuedev/plugins/steps/chat.py", line 106, in run
  File "/var/folders/r6/h3hc6kj91s9czcyds6fh0yqh0000gn/T/_MEIxFzRFC/continuedev/libs/llm/base.py", line 406, in stream_chat
    async for chunk in self._stream_chat(messages=messages, options=options):
  File "/var/folders/r6/h3hc6kj91s9czcyds6fh0yqh0000gn/T/_MEIxFzRFC/continuedev/libs/llm/openai.py", line 133, in _stream_chat
    async for chunk in await openai.ChatCompletion.acreate(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "openai/api_resources/chat_completion.py", line 45, in acreate
  File "openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
  File "openai/api_requestor.py", line 310, in arequest
  File "openai/api_requestor.py", line 646, in _interpret_async_response
  File "openai/api_requestor.py", line 675, in _interpret_response_line
```

### To Reproduce
To help us to reproduce this bug, please provide information below:

1. Your Python version.
2. The version of xinference you use.
3. Versions of crucial packages.
4. Full stack of the error.
5. Minimized code to reproduce the error.

### Expected behavior
A clear and concise description of what you expected to happen.

### Additional context
Add any other context about the problem here.
[ { "body": "### Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n```python\r\nopenai.error.APIError: HTTP code 200 from API ({\"id\": \"chatcmpl-346fc892-932c-464a-8051-9df38827d566\", \"model\": \"my_code_llama\", \"created\": 1696932753, \"object\": \"chat.completion.chunk\", \"choices\": [{\"index\": 0, \"delta\": {\"role\": ....\r\n\r\n```\r\n\r\n```python\r\nTraceback (most recent call last):\r\n\r\n File \"openai/api_requestor.py\", line 673, in _interpret_response_line\r\n\r\n File \"json/__init__.py\", line 346, in loads\r\n\r\n File \"json/decoder.py\", line 337, in decode\r\n\r\n File \"json/decoder.py\", line 355, in raw_decode\r\n\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"server/continuedev/core/autopilot.py\", line 437, in _run_singular_step\r\n\r\n File \"server/continuedev/core/main.py\", line 376, in __call__\r\n\r\n File \"server/continuedev/plugins/steps/chat.py\", line 106, in run\r\n\r\n File \"/var/folders/r6/h3hc6kj91s9czcyds6fh0yqh0000gn/T/_MEIxFzRFC/continuedev/libs/llm/base.py\", line 406, in stream_chat\r\n async for chunk in self._stream_chat(messages=messages, options=options):\r\n\r\n File \"/var/folders/r6/h3hc6kj91s9czcyds6fh0yqh0000gn/T/_MEIxFzRFC/continuedev/libs/llm/openai.py\", line 133, in _stream_chat\r\n async for chunk in await openai.ChatCompletion.acreate(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"openai/api_resources/chat_completion.py\", line 45, in acreate\r\n\r\n File \"openai/api_resources/abstract/engine_api_resource.py\", line 217, in acreate\r\n\r\n File \"openai/api_requestor.py\", line 310, in arequest\r\n\r\n File \"openai/api_requestor.py\", line 646, in _interpret_async_response\r\n\r\n File \"openai/api_requestor.py\", line 675, in _interpret_response_line\r\n```\r\n\r\n### To Reproduce\r\nTo help us to reproduce this bug, please provide information below:\r\n\r\n1. Your Python version.\r\n2. The version of xinference you use.\r\n3. Versions of crucial packages.\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n### Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n### Additional context\r\nAdd any other context about the problem here.\r\n", "number": 523, "title": "BUG: Stream response does not compatible with openai" } ]
f22977bcb4f0f7c2367f73d18d198c708aeced53
{ "head_commit": "f4b46cc653f3ca87ea44fa495775bf3b82e15bfa", "head_commit_message": "Refine ut", "patch_to_review": "diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml\nindex a5bd69ec4d..25bb360901 100644\n--- a/.github/workflows/python.yaml\n+++ b/.github/workflows/python.yaml\n@@ -97,6 +97,7 @@ jobs:\n pip install s3fs\n pip install modelscope\n pip install -e \".[dev]\"\n+ pip install openai\n working-directory: .\n \n - name: Test with pytest\ndiff --git a/xinference/client/common.py b/xinference/client/common.py\nindex e2a37076d5..b3d05450f6 100644\n--- a/xinference/client/common.py\n+++ b/xinference/client/common.py\n@@ -13,20 +13,23 @@\n # limitations under the License.\n \n import json\n-from typing import Iterator\n-\n-from ..types import ChatCompletionChunk, CompletionChunk\n+from typing import Any, Iterator\n \n \n def streaming_response_iterator(\n- response_chunk: Iterator[bytes],\n-) -> Iterator[\"CompletionChunk\"]:\n+ response_lines: Iterator[bytes],\n+) -> Iterator[Any]:\n \"\"\"\n Create an Iterator to handle the streaming type of generation.\n \n+ Note\n+ ----------\n+ This method is for compatible with openai. Please refer to:\n+ https://github.com/openai/openai-python/blob/v0.28.1/openai/api_requestor.py#L99\n+\n Parameters\n ----------\n- response_chunk: Iterator[bytes]\n+ response_lines: Iterator[bytes]\n Generated lines by the Model Generator.\n \n Returns\n@@ -36,36 +39,12 @@ def streaming_response_iterator(\n \n \"\"\"\n \n- for chunk in response_chunk:\n- content = json.loads(chunk.decode(\"utf-8\"))\n- error = content.get(\"error\", None)\n- if error is not None:\n- raise Exception(str(error))\n- yield content\n-\n-\n-# Duplicate code due to type hint issues\n-def chat_streaming_response_iterator(\n- response_chunk: Iterator[bytes],\n-) -> Iterator[\"ChatCompletionChunk\"]:\n- \"\"\"\n- Create an Iterator to handle the streaming type of generation.\n-\n- Parameters\n- ----------\n- response_chunk: Iterator[bytes]\n- Generated chunk by the Model Generator.\n-\n- Returns\n- -------\n- Iterator[\"ChatCompletionChunk\"]\n- Iterator of ChatCompletionChunks generated by models.\n-\n- \"\"\"\n-\n- for chunk in response_chunk:\n- content = json.loads(chunk.decode(\"utf-8\"))\n- error = content.get(\"error\", None)\n- if error is not None:\n- raise Exception(str(error))\n- yield content\n+ for line in response_lines:\n+ line = line.strip()\n+ if line.startswith(b\"data:\"):\n+ data = line[len(b\"data:\") :].strip()\n+ data = json.loads(data.decode(\"utf-8\"))\n+ yield data\n+ elif line.startswith(b\"error:\"):\n+ error = line[len(b\"error:\") :].strip()\n+ raise Exception(error.decode(\"utf-8\"))\ndiff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py\nindex a6a6d16e1c..30b127c26e 100644\n--- a/xinference/client/restful/restful_client.py\n+++ b/xinference/client/restful/restful_client.py\n@@ -17,7 +17,7 @@\n \n import requests\n \n-from ..common import chat_streaming_response_iterator, streaming_response_iterator\n+from ..common import streaming_response_iterator\n \n if TYPE_CHECKING:\n from ...types import (\n@@ -128,7 +128,7 @@ def generate(\n )\n \n if stream:\n- return streaming_response_iterator(response.iter_content(chunk_size=None))\n+ return streaming_response_iterator(response.iter_lines())\n \n response_data = response.json()\n return response_data\n@@ -206,9 +206,7 @@ def chat(\n )\n \n if stream:\n- return chat_streaming_response_iterator(\n- response.iter_content(chunk_size=None)\n- 
)\n+ return streaming_response_iterator(response.iter_lines())\n \n response_data = response.json()\n return response_data\n@@ -272,9 +270,7 @@ def chat(\n )\n \n if stream:\n- return chat_streaming_response_iterator(\n- response.iter_content(chunk_size=None)\n- )\n+ return streaming_response_iterator(response.iter_lines())\n \n response_data = response.json()\n return response_data\ndiff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py\nindex a2569de5c0..82853904f1 100644\n--- a/xinference/core/restful_api.py\n+++ b/xinference/core/restful_api.py\n@@ -518,12 +518,15 @@ async def stream_results():\n try:\n iterator = await model.generate(body.prompt, kwargs)\n async for item in iterator:\n- yield json.dumps(item)\n+ yield f\"data: {json.dumps(item)}\\n\"\n except Exception as ex:\n logger.exception(\"Completion stream got an error: %s\", ex)\n- yield json.dumps({\"error\": str(ex)})\n+ yield f\"error: {ex}\"\n \n- return StreamingResponse(stream_results())\n+ # The Content-Type: text/event-stream header is required for openai stream.\n+ return StreamingResponse(\n+ stream_results(), headers={\"Content-Type\": \"text/event-stream\"}\n+ )\n else:\n try:\n return await model.generate(body.prompt, kwargs)\n@@ -631,12 +634,15 @@ async def stream_results():\n prompt, system_prompt, chat_history, kwargs\n )\n async for item in iterator:\n- yield json.dumps(item)\n+ yield f\"data: {json.dumps(item)}\\n\"\n except Exception as ex:\n logger.exception(\"Chat completion stream got an error: %s\", ex)\n- yield json.dumps({\"error\": str(ex)})\n+ yield f\"error: {ex}\"\n \n- return StreamingResponse(stream_results())\n+ # The Content-Type: text/event-stream header is required for openai stream.\n+ return StreamingResponse(\n+ stream_results(), headers={\"Content-Type\": \"text/event-stream\"}\n+ )\n else:\n try:\n if is_chatglm_ggml:\ndiff --git a/xinference/core/tests/test_restful_api.py b/xinference/core/tests/test_restful_api.py\nindex 88ccbe9db1..f8a82931a0 100644\n--- a/xinference/core/tests/test_restful_api.py\n+++ b/xinference/core/tests/test_restful_api.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import openai\n import pytest\n import requests\n \n@@ -344,3 +345,54 @@ def test_restful_api_for_embedding(setup):\n response = requests.get(f\"{endpoint}/v1/models\")\n response_data = response.json()\n assert len(response_data) == 0\n+\n+\[email protected]\n+async def test_openai(setup):\n+ endpoint, _ = setup\n+ url = f\"{endpoint}/v1/models\"\n+\n+ # list\n+ response = requests.get(url)\n+ response_data = response.json()\n+ assert len(response_data) == 0\n+\n+ # launch\n+ payload = {\n+ \"model_uid\": \"test_restful_api\",\n+ \"model_name\": \"orca\",\n+ \"quantization\": \"q4_0\",\n+ }\n+\n+ response = requests.post(url, json=payload)\n+ response_data = response.json()\n+ model_uid_res = response_data[\"model_uid\"]\n+ assert model_uid_res == \"test_restful_api\"\n+\n+ openai.api_key = \"\"\n+ openai.api_base = f\"{endpoint}/v1\"\n+\n+ # chat\n+ messages = [\n+ {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n+ {\"role\": \"user\", \"content\": \"Hello!\"},\n+ {\"role\": \"assistant\", \"content\": \"Hi what can I help you?\"},\n+ {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n+ ]\n+\n+ result = []\n+ async for chunk in await openai.ChatCompletion.acreate(\n+ messages=messages, stream=True, model=model_uid_res\n+ ):\n+ if not 
hasattr(chunk, \"choices\") or len(chunk.choices) == 0:\n+ continue\n+ result.append(chunk)\n+ assert result\n+ assert type(result[0]).__name__ == \"OpenAIObject\"\n+\n+ result = await openai.ChatCompletion.acreate(\n+ messages=messages, stream=False, model=model_uid_res\n+ )\n+\n+ assert result\n+ assert type(result).__name__ == \"OpenAIObject\"\n" }
[ { "diff_hunk": "@@ -97,6 +97,7 @@ jobs:\n pip install s3fs\n pip install modelscope\n pip install -e \".[dev]\"\n+ pip install openai", "line": null, "original_line": 100, "original_start_line": null, "path": ".github/workflows/python.yaml", "start_line": null, "text": "@user1:\nopenai should be added to [dev] so that developers can run the unittests without any dependency problem.\n\n@author:\nGood suggestion. Thanks." } ]
86a49a978b41b82c106aab8030f11844cc2de588
diff --git a/setup.cfg b/setup.cfg index 179c099761..62ad84c0eb 100644 --- a/setup.cfg +++ b/setup.cfg @@ -62,6 +62,7 @@ dev = jieba>=0.42.0 flake8>=3.8.0 black + openai all = ctransformers llama-cpp-python>=0.2.0 diff --git a/xinference/client/common.py b/xinference/client/common.py index e2a37076d5..b3d05450f6 100644 --- a/xinference/client/common.py +++ b/xinference/client/common.py @@ -13,20 +13,23 @@ # limitations under the License. import json -from typing import Iterator - -from ..types import ChatCompletionChunk, CompletionChunk +from typing import Any, Iterator def streaming_response_iterator( - response_chunk: Iterator[bytes], -) -> Iterator["CompletionChunk"]: + response_lines: Iterator[bytes], +) -> Iterator[Any]: """ Create an Iterator to handle the streaming type of generation. + Note + ---------- + This method is for compatible with openai. Please refer to: + https://github.com/openai/openai-python/blob/v0.28.1/openai/api_requestor.py#L99 + Parameters ---------- - response_chunk: Iterator[bytes] + response_lines: Iterator[bytes] Generated lines by the Model Generator. Returns @@ -36,36 +39,12 @@ def streaming_response_iterator( """ - for chunk in response_chunk: - content = json.loads(chunk.decode("utf-8")) - error = content.get("error", None) - if error is not None: - raise Exception(str(error)) - yield content - - -# Duplicate code due to type hint issues -def chat_streaming_response_iterator( - response_chunk: Iterator[bytes], -) -> Iterator["ChatCompletionChunk"]: - """ - Create an Iterator to handle the streaming type of generation. - - Parameters - ---------- - response_chunk: Iterator[bytes] - Generated chunk by the Model Generator. - - Returns - ------- - Iterator["ChatCompletionChunk"] - Iterator of ChatCompletionChunks generated by models. 
- - """ - - for chunk in response_chunk: - content = json.loads(chunk.decode("utf-8")) - error = content.get("error", None) - if error is not None: - raise Exception(str(error)) - yield content + for line in response_lines: + line = line.strip() + if line.startswith(b"data:"): + data = line[len(b"data:") :].strip() + data = json.loads(data.decode("utf-8")) + yield data + elif line.startswith(b"error:"): + error = line[len(b"error:") :].strip() + raise Exception(error.decode("utf-8")) diff --git a/xinference/client/restful/restful_client.py b/xinference/client/restful/restful_client.py index b1c42511aa..aec180310e 100644 --- a/xinference/client/restful/restful_client.py +++ b/xinference/client/restful/restful_client.py @@ -17,7 +17,7 @@ import requests -from ..common import chat_streaming_response_iterator, streaming_response_iterator +from ..common import streaming_response_iterator if TYPE_CHECKING: from ...types import ( @@ -173,7 +173,7 @@ def generate( ) if stream: - return streaming_response_iterator(response.iter_content(chunk_size=None)) + return streaming_response_iterator(response.iter_lines()) response_data = response.json() return response_data @@ -251,9 +251,7 @@ def chat( ) if stream: - return chat_streaming_response_iterator( - response.iter_content(chunk_size=None) - ) + return streaming_response_iterator(response.iter_lines()) response_data = response.json() return response_data @@ -317,9 +315,7 @@ def chat( ) if stream: - return chat_streaming_response_iterator( - response.iter_content(chunk_size=None) - ) + return streaming_response_iterator(response.iter_lines()) response_data = response.json() return response_data diff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py index 61c9752c8b..011b65852f 100644 --- a/xinference/core/restful_api.py +++ b/xinference/core/restful_api.py @@ -533,12 +533,15 @@ async def stream_results(): try: iterator = await model.generate(body.prompt, kwargs) async for item in iterator: - yield json.dumps(item) + yield f"data: {json.dumps(item)}\n" except Exception as ex: logger.exception("Completion stream got an error: %s", ex) - yield json.dumps({"error": str(ex)}) + yield f"error: {ex}" - return StreamingResponse(stream_results()) + # The Content-Type: text/event-stream header is required for openai stream. + return StreamingResponse( + stream_results(), headers={"Content-Type": "text/event-stream"} + ) else: try: return await model.generate(body.prompt, kwargs) @@ -669,12 +672,15 @@ async def stream_results(): prompt, system_prompt, chat_history, kwargs ) async for item in iterator: - yield json.dumps(item) + yield f"data: {json.dumps(item)}\n" except Exception as ex: logger.exception("Chat completion stream got an error: %s", ex) - yield json.dumps({"error": str(ex)}) + yield f"error: {ex}" - return StreamingResponse(stream_results()) + # The Content-Type: text/event-stream header is required for openai stream. + return StreamingResponse( + stream_results(), headers={"Content-Type": "text/event-stream"} + ) else: try: if is_chatglm_ggml: diff --git a/xinference/core/tests/test_restful_api.py b/xinference/core/tests/test_restful_api.py index 88ccbe9db1..19a0fa72c3 100644 --- a/xinference/core/tests/test_restful_api.py +++ b/xinference/core/tests/test_restful_api.py @@ -12,6 +12,9 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+import sys + +import openai import pytest import requests @@ -344,3 +347,57 @@ def test_restful_api_for_embedding(setup): response = requests.get(f"{endpoint}/v1/models") response_data = response.json() assert len(response_data) == 0 + + [email protected] [email protected]( + sys.platform == "win32", reason="Window CI hangs after run this case." +) +async def test_openai(setup): + endpoint, _ = setup + url = f"{endpoint}/v1/models" + + # list + response = requests.get(url) + response_data = response.json() + assert len(response_data) == 0 + + # launch + payload = { + "model_uid": "test_restful_api", + "model_name": "orca", + "quantization": "q4_0", + } + + response = requests.post(url, json=payload) + response_data = response.json() + model_uid_res = response_data["model_uid"] + assert model_uid_res == "test_restful_api" + + openai.api_key = "" + openai.api_base = f"{endpoint}/v1" + + # chat + messages = [ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Hello!"}, + {"role": "assistant", "content": "Hi what can I help you?"}, + {"role": "user", "content": "What is the capital of France?"}, + ] + + result = [] + async for chunk in await openai.ChatCompletion.acreate( + messages=messages, stream=True, model=model_uid_res + ): + if not hasattr(chunk, "choices") or len(chunk.choices) == 0: + continue + result.append(chunk) + assert result + assert type(result[0]).__name__ == "OpenAIObject" + + result = await openai.ChatCompletion.acreate( + messages=messages, stream=False, model=model_uid_res + ) + + assert result + assert type(result).__name__ == "OpenAIObject"
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-27407@873bdd5
sympy/sympy
Python
27,407
Migrate sympy.physics.quantum to use Kinds and _constructor_postprocessor_mapping
The goal of this effort is to remove the special methods like `__mul__`, `__rmul__`, and `__pow__` from the quantum classes. Instead, we are moving to using the new `Kind` and `_constructor_postprocessor_mapping` logic.

Before review/merge:

- [x] Implement kinds for bras, kets and operators.
- [x] Replace custom `__mul__` and `__pow__` method with `_constructor_postprocessor_mapping` based transformed.
- [x] Add tests.
- [x] Make sure all tests and doctests pass.
- [x] Add docstrings.
- [x] Add note to release notes about `tensor_product_simp` and friends being deprecated.

Follow up issues to open:

- [x] Open issue documenting issues with kind and `_constructor_postprocessor_mapping` APIs.
- [x] Open issue about introducing a binary operator for tensor products.
- [x] Open issue for proper kind handling of `Operator*Ket` and `Bra*Operator` (improvements to the kind dispatcher are needed first.

#### References to other Issues or PRs
Fixes #27248
Fixes #19540
Fixes #19538

#### Brief description of what is fixed or changed
* Add `OperatorKind`, `BraKind`, `KetKind`.
* Add logic that uses `_constructor_postprocessor_mapping` to transform quantum expressions when building `Mul` and `Pow` expressions.
* Use multiple-dipatch to define rules for the transformations.
* Remove all `__mul__` and `__pow__` methods from quantum package.
* All combinations of bras and kets are now consistently identified as inner/outer products, with inner products taking taking the highest priority when there is ambiguity.
* TypeError is raised when trying to build tensor products using `*`, such as `Ket()*Ket()` or `Bra()*Bra()`.
* Breaking change: `tensor_product_simp` and its helper functions have been deprecated as the transformations they apply are now handled automatically using `_constructor_postprocessor_mapping`.

#### Release Notes

<!-- Write the release notes for this release below between the BEGIN and END
statements. The basic format is a bulleted list with the name of the subpackage
and the release note for this PR. For example:

* solvers
  * Added a new solver for logarithmic equations.

* functions
  * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`.

* physics.units
  * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt.

or if no release note(s) should be included use:

NO ENTRY

See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more
information on how to write release notes. The bot will check your release
notes automatically to see if they are formatted correctly. -->

<!-- BEGIN RELEASE NOTES -->
* physics.quantum
  * Removed custom `__mul__` and `__pow__` methods from quantum classes. This logic has been replaced by `_constructor_postprocessor_mapping` based transforms that are more complete and consistent.
  * Created kinds for bras, kets and operators, with kind dispatch resolution.
  * All combinations of bras and kets are now consistently identified as inner/outer products, with inner products taking taking the highest priority when there is ambiguity.
  * TypeError is raised when trying to build tensor products using `*`, such as `Ket()*Ket()` or `Bra()*Bra()`.
  * Breaking change: `tensor_product_simp` and its helper functions have been deprecated as the transformations they apply are now handled automatically using `_constructor_postprocessor_mapping`.
<!-- END RELEASE NOTES -->
2024-12-27T01:23:15Z
Multiplying kets should produce a tensor product
## Input code
```python
srepr(qapply(Ket('a') * Ket('b')))
```
## Expected output
```python
TensorProduct(Ket(Symbol('a')), Ket(Symbol('b')))
```
## Actual output
```python
Mul(Ket(Symbol('a')), Ket(Symbol('b')))
```
## Rationale
Qapply converts _bra_ × _ket_ to inner product and _ket_ × _bra_ to outer product. This gives a precedent to also support the rules _bra_ × _bra_ → tensor product of bras, _ket_ × _ket_ → tensor product of kets.
```python
srepr(qapply(Bra('a')*Ket('b')))
> InnerProduct(Bra(Symbol('a')),Ket(Symbol('b')))

srepr(qapply(Ket('a')*Bra('b')))
> OuterProduct(Ket(Symbol('a')),Bra(Symbol('b')))
```

Inner and outer product should accept tensor products of bra-kets
## Input code
```python
k1 = OrthogonalKet(1)
k2 = OrthogonalKet(2)

lh = TensorProduct(k1, k2) # |1>|2>
rh = TensorProduct(k1 + k2, k1 - k2) # |1>|1> - |1>|2> + |2>|1> - |2>|2>

prod = InnerProduct(Dagger(lh), rh)
srepr(qapply(prod))
```
## Expected result
```python
# - <2|<1|1>|2>
Mul(Integer(-1), InnerProduct(
  TensorProduct(OrthogonalBra(1), OrthogonalBra(2)),
  TensorProduct(OrthogonalKet(1), OrthogonalKet(2))
))
```
## Actual output
```
TypeError: KetBase subclass expected, got: (|1> + |2>)x(|1> - |2>)
```
## Rationale
When working on a product space H⊗H, a tensor product of kets represents a state, much like one ket represents a state in H. Therefore, all the things we can do on bra-kets, we should be able to do on their tensor products.

sympy.physics.quantum: Manually force creation of outer products in existing expressions
After a longer calculation I end up with many expressions like `c*|1>*<1| - |1><1|`, where `c` is a complex number.
I cannot factor this because `|1>*<1| != |1><1|`.

The documentation states that one can force the creation of outer/inner products by putting parentheses around the corresponding bras/kets.
But I can't do this, because I'm not manually creating this expression but instead this is the result of some other manipulations.

Is there any way I can force the creation of outer products in an existing expression?

Example to reproduce:

```python
from sympy import symbols
from sympy.physics.quantum import Bra, Ket, qapply, Dagger

psi1 = Ket("1")
psi1_dag = Dagger(psi1)

c = symbols("c")

expression = c*psi1*psi1_dag - psi1*psi1_dag #Don't change. This is the result from a longer computation.
print(qapply(expression)) #c*|1>*<1| - |1><1|
```

Right now I'm working around this issue by doing `expression.subs(Mul(psi1, Dagger(psi1)), psi1*Dagger(psi1))`, but this is a bit cumbersome because I have to do this for every combination of bra's and ket's that appears in my expression.
This is a good call out. I agree that `ket*ket` and `bra*bra` should return tensor products. The code to convert products of bras and kets to inner and outer products is here: https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/state.py#L226 https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/state.py#L234 https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/state.py#L317 https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/state.py#L325 It should be fairly straightforward to extend this to cover the ket*ket and bra*bra cases. There may be additional handling in `qapply` that is needed to work with tensor products of states, but that can be covered in a different issue. Is it fine if I address this? Yes, go ahead. I don't know the quantum code so well myself but maybe @m93a could review a pull request. I will mention first that this is my first involvement in a GitHub project, so I may misunderstand some aspects of the social protocols for comment-section usage, although they mostly seem to be clear so far! I have searched through the project for items related to the tensor product and found that a number of packages contain their own definitions for the tensor product. I have also reviewed the history of these coexisting definitions and found that people have long been merging redundant instances, for example. Although this is not quite the same context as the finite-dimensional one in these other packages, I am thinking I could perhaps attempt to utilize the existing frameworks for tensor products, tensors and traces/contractions to create a consistent handling that is simply adapted to the infinite-dimensional case - part of this is to account for the fact that tensors may take multiple "arguments," and there is no way to specify "how" the tensors in the inner product may be interacting. Is it okay if I do this? If the words are not meaningful, the main idea is that I want to modify the way that a number of major functions work in "quantum." (I apologize if I'm not supposed to ask about these things in advance like this, I am still learning.) I had two questions about more extreme details: - The definition of the above "InnerProduct" inherits from the basic expression class, rather than the quantum expression class, and I notice that this fact is often important in the code, as well as the fact that all quantum expression descendants are assumed not to commute (the InnerProduct is assumed to only produce scalars, which is precisely the kind of behavior I would be generalizing.) Is it okay if I modify the InnerProduct to inherit from the quantum expression class, account for this in the code, and create handling for noncommutative quantum expression descendants? - Would it make sense for me to later possibly contribute a functional analysis module (considering the goal, probably not so large upon its creation,) so that the extra definitions in the quantum module may be less extensively specialized and more easily found, or is the quantum library meant to be somewhat autonomous? I'll be speaking for myself – and while I am a contributor, my contributions were simply a result of bodging things together until they worked, so I don't have a great deal of understanding sympy as a whole. 
So my opinions might not be the best advice, but I'll share them anyway 😁️ 1) Since the inner product of `<a|` and `|b>|c>` is a perfectly valid expression that results in `(<a|b>) |c>`, changing `InnerProduct` to inherit from QuantumExpr sounds like a reasonable thing to do. AFAIK it should still be possible to set `is_commutative` manually for the occasions when the result really is a scalar. 2) A functional analysis module sounds like a great idea to me, since it's really hard to do anything smart with the Dirac notation unless you know the properties of the operators. I think that that the FA module should be as autonomous from the core sympy, as physics.quantum is now. But making physics.quantum dependent on the FA module would be very beneficial for it IMO. How many changes would be suitable as an address to the specific issue? A lot of the improvements would, I suppose, be out of the scope of the original problem, which can be solved pretty simply. Should those be included into an independent pull or should this also have changes that, for example, would contribute to the general development of the InnerProduct function (eg, general inner product materials in a separate package, arguments where the significant "indices" would be described, etc?) On the InnerProduct itself - I notice it calls an _eval_innerproduct function in the bras and kets, which themselves refer to a "dispatch function" in the qexpr module. This would not generalize really well to arbitrary objects or inner products themselves. Would it be considered too extreme to have the InnerProduct function be entirely self-contained? I checked other SymPy packages to see how common this kind of arrangement is and it is not unique to this, but here it could hold the development of the general inner product back a bit (and though I can see in this form that it could have value to the creation of special-case kets and bras, I am thinking of including a simple way to define to some extent the contents of those kets and bras, which is currently not possible (and which is one of the reasons I took a while to respond this time :stuck_out_tongue: .)) To reiterate what I mean - rather than calling the _eval_innerproduct in the "states" module, the InnerProduct function would be as self-contained as possible and only make external references to obtain information for the product itself, not for computations themselves. I'm not sure I understand your question but I will say this: Currently the quantum module is effectively unmaintained. This means that there might not be anyone who can advise on what is a good change to make or not for the broader direction of the module. I personally can't advise although what you say sounds reasonable to me. Hopefully @m93a will be able to help... > How many changes would be suitable as an address to the specific issue? If it's up to me, there's no upper bound 😁️ This module needs some love and care, and if you want to rewrite some parts to make it more powerful/manageable, go for it. I'll gladly read through the code and provide a second opinion. Just be sure to chunk up your work to reasonably-sized PRs. > Would it be considered too extreme to have the InnerProduct function be entirely self-contained? As far as I know, right now the only internal function that redefines `_eval_innerproduct` is `OrthogonalKet`, so moving that logic elsewhere shouldn't be a problem. 
Currently, the quantum module is half-baked in a way that expects you to define new classes that extend Ket for different purposes, which isn't a good API I'd say. I'm not sure if there are any “ket classes”, other than `Ket` and `OrthogonalKet`, that might be useful. I think that vast majority of problems in QM are described with orthonormal basis made from the eigenstates of some operator. So limiting the ket types to “general” and “orthogonal” sounds reasonable… But if you find a good way to make it extendable even after moving the logic, that's great!

The way I think about it, the best API for bra-kets would let you define an operator and then construct an orthogonal basis from it. Then you'd have a “referernce frame” and you could do basis transformations (automatic Clebsch-Gordan?), matrix representations of other operators (albeit possibly infinite-dimensional), ladder operators and things like that. However making this work sounds like huge undertaking, *if* it's possible at all.

It shouldn't be too difficult to enable inner and outer products to work with tensor products of states. For inner products it is mostly about carrying around the tensor products and then reducing them to their individual inner products. For outer products it is a bit more complicated as qapply will also need to be modified to handle the case of tensor product operators on outer products of tensor products states.

Great question. This is a bit subtle. You are seeing this behavior because the Python multiplication operator `*` and the sympy class `Mul` (which represents symbolic multiplication) are related, but work in slightly different ways. As a result, `psi1*psi1_dag` will give you an `OuterProduct`, but `Mul(psi1, psi1_dag)` will not. The `Bra` and `Ket` classes in sympy have special logic (see https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/state.py#L226) to convert `*` style products to `OuterProducts` but it won't get triggered when code manually create a product using the `Mul` class rather than the `*` operator.

One might think that `qapply` could be modified to always use `*` style products rather than `Mul`. For example, we could change [this line](https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/qapply.py#L117) to use `*`. However, the goal of `qapply` it to actually perform the actions of operators, take inner products, etc. As such, `qapply` works with `OuterProduct`s by picking them apart and letting the bras and kets act on adjacent operators and states [see here](https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/qapply.py#L156). What this means is that the fix (change `Mul` to `*`) needs to be applied outside of `qapply` at the location where the `Mul` classes are being used. This is also related to the limited ways that sympy has for customizing how new subclasses of `Expr` work with existing classes like `Add`, `Mul`, `Pow`.

However, to get you unblocked, the following function will apply the fix to any expression:

```python
import math

def fix_mul(e):
    def _my_mul(*args):
        return math.prod(args)
    return e.replace(Mul, _my_mul)
```

Let's leave this issue open to discuss where we might put this logic in `sympy.physics.quantum` though. There may be some cases that aren't handled properly by the code I proposed above. There are some edge cases when the outer product is in different locations in a product that we will have to sort through.
[ { "body": "## Input code\r\n```python\r\nsrepr(qapply(Ket('a') * Ket('b')))\r\n```\r\n## Expected output\r\n```python\r\nTensorProduct(Ket(Symbol('a')), Ket(Symbol('b')))\r\n```\r\n## Actual output\r\n```python\r\nMul(Ket(Symbol('a')), Ket(Symbol('b')))\r\n```\r\n## Rationale\r\nQapply converts _bra_ × _ket_ to inner product and _ket_ × _bra_ to outer product. This gives a precedent to also support the rules _bra_ × _bra_ → tensor product of bras, _ket_ × _ket_ → tensor product of kets.\r\n```python\r\nsrepr(qapply(Bra('a')*Ket('b')))\r\n> InnerProduct(Bra(Symbol('a')),Ket(Symbol('b')))\r\n\r\nsrepr(qapply(Ket('a')*Bra('b')))\r\n> OuterProduct(Ket(Symbol('a')),Bra(Symbol('b')))\r\n```", "number": 19538, "title": "Multiplying kets should produce a tensor product" }, { "body": "## Input code\r\n```python\r\nk1 = OrthogonalKet(1)\r\nk2 = OrthogonalKet(2)\r\n\r\nlh = TensorProduct(k1, k2) # |1>|2>\r\nrh = TensorProduct(k1 + k2, k1 - k2) # |1>|1> - |1>|2> + |2>|1> - |2>|2>\r\n\r\nprod = InnerProduct(Dagger(lh), rh)\r\nsrepr(qapply(prod))\r\n```\r\n## Expected result\r\n```python\r\n# - <2|<1|1>|2>\r\nMul(Integer(-1), InnerProduct(\r\n  TensorProduct(OrthogonalBra(1), OrthogonalBra(2)),\r\n TensorProduct(OrthogonalKet(1), OrthogonalKet(2))\r\n))\r\n```\r\n## Actual output\r\n```\r\nTypeError: KetBase subclass expected, got: (|1> + |2>)x(|1> - |2>)\r\n```\r\n## Rationale\r\nWhen working on a product space H⊗H, a tensor product of kets represents a state, much like one ket represents a state in H. Therefore, all the things we can do on bra-kets, we should be able to do on their tensor products.", "number": 19540, "title": "Inner and outer product should accept tensor products of bra-kets" }, { "body": "After a longer calculation I end up with many expressions like `c*|1>*<1| - |1><1|`, where `c` is a complex number.\nI cannot factor this because `|1>*<1| != |1><1|`.\n\nThe documentation states that one can force the creation of outer/inner products by putting parentheses around the corresponding bras/kets.\nBut I can't do this, because I'm not manually creating this expression but instead this is the result of some other manipulations.\n\nIs there any way I can force the creation of outer products in an existing expression?\n\nExample to reproduce:\n\n```python\nfrom sympy import symbols\nfrom sympy.physics.quantum import Bra, Ket, qapply, Dagger\n\npsi1 = Ket(\"1\")\npsi1_dag = Dagger(psi1)\n\nc = symbols(\"c\")\n\nexpression = c*psi1*psi1_dag - psi1*psi1_dag #Don't change. This is the result from a longer computation.\nprint(qapply(expression)) #c*|1>*<1| - |1><1|\n```\n\nRight now I'm working around this issue by doing `expression.subs(Mul(psi1, Dagger(psi1)), psi1*Dagger(psi1))`, but this is a bit cumbersome because I have to do this for every combination of bra's and ket's that appears in my expression.", "number": 27248, "title": "sympy.physics.quantum: Manually force creation of outer products in existing expressions" } ]
b7e7de3b02c8d1edf683e12c61a69f57eebd4ce9
{ "head_commit": "873bdd59cec00d1e25b665400fd49301c94f605b", "head_commit_message": "quantum: Add docstrings and inline comments to PR\n\nIn this commit I have added docstrings and inline comments for (hopefully) everything that needs it.", "patch_to_review": "diff --git a/doc/src/explanation/active-deprecations.md b/doc/src/explanation/active-deprecations.md\nindex d8b1c469fbdf..22fb46480686 100644\n--- a/doc/src/explanation/active-deprecations.md\n+++ b/doc/src/explanation/active-deprecations.md\n@@ -76,10 +76,28 @@ SymPy deprecation warnings.\n \n ## Version 1.14\n \n+(deprecated-tensorproduct-simp)=\n+### Deprecated tensor_product_simp from physics.quantum\n+\n+The ``tensor_product_simp`` function in the ``sympy.physics.quantum``\n+module has been deprecated along with two helper functions,\n+``tensor_product_simp_Mul`` and ``tensor_product_simp_Pow``. The \n+transformations performed by these functions are now applied\n+automatically to all quantum expressions in the new\n+``sympy.physics.quantum.transforms`` module.\n+\n+If you are using these functions in your code, you can remove them as\n+they are now reduntant.\n+\n+Their current implementations have been replaced by a simple\n+pass-through as all quantum expressions will already be in the form\n+originally produced by these functions. These pass throughs will\n+remain, along with its tests for at least one year after the 1.14 release.\n+\n (deprecated-operator-identity)=\n ### Deprecated IdentityOperator from physics.quantum\n \n-The ``IdentityOperator`` in the ``sympy.physics.quantum`` moddule has been\n+The ``IdentityOperator`` in the ``sympy.physics.quantum`` module has been\n deprecated. Originally, we thought that it would be helpful to have a\n multiplicative identity for quantum operators and states. 
However, at this\n time, it is unused in `sympy.physics.quantum` for anything other than tests\ndiff --git a/sympy/physics/quantum/__init__.py b/sympy/physics/quantum/__init__.py\nindex bf08e1f7a383..36203f1a48c4 100644\n--- a/sympy/physics/quantum/__init__.py\n+++ b/sympy/physics/quantum/__init__.py\n@@ -29,7 +29,9 @@\n \n 'hbar', 'HBar',\n \n+ '_postprocess_state_mul', '_postprocess_state_pow'\n ]\n+\n from .anticommutator import AntiCommutator\n \n from .qapply import qapply\n@@ -57,3 +59,7 @@\n from .tensorproduct import TensorProduct, tensor_product_simp\n \n from .constants import hbar, HBar\n+\n+# These are private, but need to be imported so they are registered\n+# as postprocessing transformers with Mul and Pow.\n+from .transforms import _postprocess_state_mul, _postprocess_state_pow\ndiff --git a/sympy/physics/quantum/anticommutator.py b/sympy/physics/quantum/anticommutator.py\nindex a73f1c207793..cbd26eade640 100644\n--- a/sympy/physics/quantum/anticommutator.py\n+++ b/sympy/physics/quantum/anticommutator.py\n@@ -1,13 +1,14 @@\n \"\"\"The anti-commutator: ``{A,B} = A*B + B*A``.\"\"\"\n \n from sympy.core.expr import Expr\n+from sympy.core.kind import KindDispatcher\n from sympy.core.mul import Mul\n from sympy.core.numbers import Integer\n from sympy.core.singleton import S\n from sympy.printing.pretty.stringpict import prettyForm\n \n-from sympy.physics.quantum.operator import Operator\n from sympy.physics.quantum.dagger import Dagger\n+from sympy.physics.quantum.kind import _OperatorKind, OperatorKind\n \n __all__ = [\n 'AntiCommutator'\n@@ -80,6 +81,13 @@ class AntiCommutator(Expr):\n \"\"\"\n is_commutative = False\n \n+ _kind_dispatcher = KindDispatcher(\"AntiCommutator_kind_dispatcher\", commutative=True)\n+\n+ @property\n+ def kind(self):\n+ arg_kinds = (a.kind for a in self.args)\n+ return self._kind_dispatcher(*arg_kinds)\n+\n def __new__(cls, A, B):\n r = cls.eval(A, B)\n if r is not None:\n@@ -110,6 +118,9 @@ def eval(cls, a, b):\n \n def doit(self, **hints):\n \"\"\" Evaluate anticommutator \"\"\"\n+ # Keep the import of Operator here to avoid problems with\n+ # circular imports.\n+ from sympy.physics.quantum.operator import Operator\n A = self.args[0]\n B = self.args[1]\n if isinstance(A, Operator) and isinstance(B, Operator):\n@@ -147,3 +158,9 @@ def _pretty(self, printer, *args):\n def _latex(self, printer, *args):\n return \"\\\\left\\\\{%s,%s\\\\right\\\\}\" % tuple([\n printer._print(arg, *args) for arg in self.args])\n+\n+\n+@AntiCommutator._kind_dispatcher.register(_OperatorKind, _OperatorKind)\n+def find_op_kind(e1, e2):\n+ \"\"\"Find the kind of an anticommutator of two OperatorKinds.\"\"\"\n+ return OperatorKind\ndiff --git a/sympy/physics/quantum/boson.py b/sympy/physics/quantum/boson.py\nindex 4dfd2286b120..0f24cae2a7ad 100644\n--- a/sympy/physics/quantum/boson.py\n+++ b/sympy/physics/quantum/boson.py\n@@ -1,6 +1,5 @@\n \"\"\"Bosonic quantum operators.\"\"\"\n \n-from sympy.core.mul import Mul\n from sympy.core.numbers import Integer\n from sympy.core.singleton import S\n from sympy.functions.elementary.complexes import conjugate\n@@ -92,18 +91,6 @@ def _eval_anticommutator_BosonOp(self, other, **hints):\n def _eval_adjoint(self):\n return BosonOp(str(self.name), not self.is_annihilation)\n \n- def __mul__(self, other):\n-\n- if isinstance(other, Mul):\n- args1 = tuple(arg for arg in other.args if arg.is_commutative)\n- args2 = tuple(arg for arg in other.args if not arg.is_commutative)\n- x = self\n- for y in args2:\n- x = x * y\n- return Mul(*args1) * 
x\n-\n- return Mul(self, other)\n-\n def _print_contents_latex(self, printer, *args):\n if self.is_annihilation:\n return r'{%s}' % str(self.name)\ndiff --git a/sympy/physics/quantum/commutator.py b/sympy/physics/quantum/commutator.py\nindex 627158657481..a2d97a679e27 100644\n--- a/sympy/physics/quantum/commutator.py\n+++ b/sympy/physics/quantum/commutator.py\n@@ -2,13 +2,14 @@\n \n from sympy.core.add import Add\n from sympy.core.expr import Expr\n+from sympy.core.kind import KindDispatcher\n from sympy.core.mul import Mul\n from sympy.core.power import Pow\n from sympy.core.singleton import S\n from sympy.printing.pretty.stringpict import prettyForm\n \n from sympy.physics.quantum.dagger import Dagger\n-from sympy.physics.quantum.operator import Operator\n+from sympy.physics.quantum.kind import _OperatorKind, OperatorKind\n \n \n __all__ = [\n@@ -94,6 +95,13 @@ class returns the commutator in an unevaluated form. To evaluate the\n \"\"\"\n is_commutative = False\n \n+ _kind_dispatcher = KindDispatcher(\"Commutator_kind_dispatcher\", commutative=True)\n+\n+ @property\n+ def kind(self):\n+ arg_kinds = (a.kind for a in self.args)\n+ return self._kind_dispatcher(*arg_kinds)\n+\n def __new__(cls, A, B):\n r = cls.eval(A, B)\n if r is not None:\n@@ -200,6 +208,9 @@ def _eval_expand_commutator(self, **hints):\n \n def doit(self, **hints):\n \"\"\" Evaluate commutator \"\"\"\n+ # Keep the import of Operator here to avoid problems with\n+ # circular imports.\n+ from sympy.physics.quantum.operator import Operator\n A = self.args[0]\n B = self.args[1]\n if isinstance(A, Operator) and isinstance(B, Operator):\n@@ -237,3 +248,9 @@ def _pretty(self, printer, *args):\n def _latex(self, printer, *args):\n return \"\\\\left[%s,%s\\\\right]\" % tuple([\n printer._print(arg, *args) for arg in self.args])\n+\n+\n+@Commutator._kind_dispatcher.register(_OperatorKind, _OperatorKind)\n+def find_op_kind(e1, e2):\n+ \"\"\"Find the kind of an anticommutator of two OperatorKinds.\"\"\"\n+ return OperatorKind\ndiff --git a/sympy/physics/quantum/dagger.py b/sympy/physics/quantum/dagger.py\nindex 6305a656c366..f96f01e3b9ac 100644\n--- a/sympy/physics/quantum/dagger.py\n+++ b/sympy/physics/quantum/dagger.py\n@@ -1,6 +1,6 @@\n \"\"\"Hermitian conjugation.\"\"\"\n \n-from sympy.core import Expr, Mul, sympify\n+from sympy.core import Expr, sympify\n from sympy.functions.elementary.complexes import adjoint\n \n __all__ = [\n@@ -79,6 +79,11 @@ class Dagger(adjoint):\n .. 
[2] https://en.wikipedia.org/wiki/Hermitian_transpose\n \"\"\"\n \n+ @property\n+ def kind(self):\n+ \"\"\"Find the kind of a dagger of something (just the kind of the something).\"\"\"\n+ return self.args[0].kind\n+\n def __new__(cls, arg, evaluate=True):\n if hasattr(arg, 'adjoint') and evaluate:\n return arg.adjoint()\n@@ -86,12 +91,5 @@ def __new__(cls, arg, evaluate=True):\n return arg.conjugate().transpose()\n return Expr.__new__(cls, sympify(arg))\n \n- def __mul__(self, other):\n- from sympy.physics.quantum import IdentityOperator\n- if isinstance(other, IdentityOperator):\n- return self\n-\n- return Mul(self, other)\n-\n adjoint.__name__ = \"Dagger\"\n adjoint._sympyrepr = lambda a, b: \"Dagger(%s)\" % b._print(a.args[0])\ndiff --git a/sympy/physics/quantum/density.py b/sympy/physics/quantum/density.py\nindex aa1f408d93fd..941373e8105d 100644\n--- a/sympy/physics/quantum/density.py\n+++ b/sympy/physics/quantum/density.py\n@@ -12,7 +12,6 @@\n from sympy.physics.quantum.operator import HermitianOperator\n from sympy.physics.quantum.represent import represent\n from sympy.physics.quantum.matrixutils import numpy_ndarray, scipy_sparse_matrix, to_numpy\n-from sympy.physics.quantum.tensorproduct import TensorProduct, tensor_product_simp\n from sympy.physics.quantum.trace import Tr\n \n \n@@ -184,13 +183,10 @@ def _generate_outer_prod(self, arg1, arg2):\n ' Non-commutative instance required'\n ' for outer product.')\n \n- # Muls of Tensor Products should be expanded\n- # before this function is called\n- if (isinstance(nc_part1[0], TensorProduct) and len(nc_part1) == 1\n- and len(nc_part2) == 1):\n- op = tensor_product_simp(nc_part1[0]*Dagger(nc_part2[0]))\n- else:\n- op = Mul(*nc_part1)*Dagger(Mul(*nc_part2))\n+ # We were able to remove some tensor product simplifications that\n+ # used to be here as those transformations are not automatically\n+ # applied by transforms.py.\n+ op = Mul(*nc_part1)*Dagger(Mul(*nc_part2))\n \n return Mul(*c_part1)*Mul(*c_part2) * op\n \ndiff --git a/sympy/physics/quantum/innerproduct.py b/sympy/physics/quantum/innerproduct.py\nindex 1b712f2db9a8..11fed882b606 100644\n--- a/sympy/physics/quantum/innerproduct.py\n+++ b/sympy/physics/quantum/innerproduct.py\n@@ -1,10 +1,11 @@\n \"\"\"Symbolic inner product.\"\"\"\n \n from sympy.core.expr import Expr\n+from sympy.core.kind import NumberKind\n from sympy.functions.elementary.complexes import conjugate\n from sympy.printing.pretty.stringpict import prettyForm\n from sympy.physics.quantum.dagger import Dagger\n-from sympy.physics.quantum.state import KetBase, BraBase\n+\n \n __all__ = [\n 'InnerProduct'\n@@ -45,23 +46,17 @@ class InnerProduct(Expr):\n >>> ip.ket\n |k>\n \n- In simple products of kets and bras inner products will be automatically\n+ In quantum expressions, inner products will be automatically\n identified and created::\n \n >>> b*k\n <b|k>\n \n- But in more complex expressions, there is ambiguity in whether inner or\n- outer products should be created::\n+ In more complex expressions, where there is ambiguity in whether inner or\n+ outer products should be created, inner products have high priority::\n \n >>> k*b*k*b\n- |k><b|*|k>*<b|\n-\n- A user can force the creation of a inner products in a complex expression\n- by using parentheses to group the bra and ket::\n-\n- >>> k*(b*k)*b\n- <b|k>*|k>*<b|\n+ <b|k>*|k><b|\n \n Notice how the inner product <b|k> moved to the left of the expression\n because inner products are commutative complex numbers.\n@@ -71,9 +66,15 @@ class InnerProduct(Expr):\n \n 
.. [1] https://en.wikipedia.org/wiki/Inner_product\n \"\"\"\n+\n+ kind = NumberKind\n+\n is_complex = True\n \n def __new__(cls, bra, ket):\n+ # Keep the import of BraBase and KetBase here to avoid problems\n+ # with circular imports.\n+ from sympy.physics.quantum.state import KetBase, BraBase\n if not isinstance(ket, KetBase):\n raise TypeError('KetBase subclass expected, got: %r' % ket)\n if not isinstance(bra, BraBase):\ndiff --git a/sympy/physics/quantum/kind.py b/sympy/physics/quantum/kind.py\nnew file mode 100644\nindex 000000000000..14b5bd2c7b0c\n--- /dev/null\n+++ b/sympy/physics/quantum/kind.py\n@@ -0,0 +1,103 @@\n+\"\"\"Kinds for Operators, Bras, and Kets.\n+\n+This module defines kinds for operators, bras, and kets. These are useful\n+in various places in ``sympy.physics.quantum`` as you often want to know\n+what the kind is of a compound expression. For example, if you multiply\n+an operator, bra, or ket by a number, you get back another operator, bra,\n+or ket - even though if you did an ``isinstance`` check you would find that\n+you have a ``Mul`` instead. The kind system is meant to give you a quick\n+way of determining how a compound expression behaves in terms of lower\n+level kinds.\n+\n+The resolution calculation of kinds for compound expressions can be found\n+either in container classes or in functions that are registered with\n+kind dispatchers.\n+\"\"\"\n+\n+from sympy.core.mul import Mul\n+from sympy.core.kind import Kind, _NumberKind\n+\n+\n+__all__ = [\n+ '_KetKind',\n+ 'KetKind',\n+ '_BraKind',\n+ 'BraKind',\n+ '_OperatorKind',\n+ 'OperatorKind',\n+]\n+\n+\n+class _KetKind(Kind):\n+ \"\"\"A kind for quantum kets.\"\"\"\n+\n+ def __new__(cls):\n+ obj = super().__new__(cls)\n+ return obj\n+\n+ def __repr__(self):\n+ return \"KetKind\"\n+\n+# Create an instance as many situations need this.\n+KetKind = _KetKind()\n+\n+\n+class _BraKind(Kind):\n+ \"\"\"A kind for quantum bras.\"\"\"\n+\n+ def __new__(cls):\n+ obj = super().__new__(cls)\n+ return obj\n+\n+ def __repr__(self):\n+ return \"BraKind\"\n+\n+# Create an instance as many situations need this.\n+BraKind = _BraKind()\n+\n+\n+from sympy.core.kind import Kind\n+\n+class _OperatorKind(Kind):\n+ \"\"\"A kind for quantum operators.\"\"\"\n+\n+ def __new__(cls):\n+ obj = super().__new__(cls)\n+ return obj\n+\n+ def __repr__(self):\n+ return \"OperatorKind\"\n+\n+# Create an instance as many situations need this.\n+OperatorKind = _OperatorKind()\n+\n+\n+#-----------------------------------------------------------------------------\n+# Kind resolution.\n+#-----------------------------------------------------------------------------\n+\n+# Note: We can't currently add kind dispatchers for the following combinations\n+# as the Mul._kind_dispatcher is set to commutative and will also\n+# register the opposite order, which isn't correct for these pairs:\n+#\n+# 1. (_OperatorKind, _KetKind)\n+# 2. (_BraKind, _OperatorKind)\n+# 3. 
(_BraKind, _KetKind)\n+\n+\n+@Mul._kind_dispatcher.register(_NumberKind, _KetKind)\n+def _mul_number_ket_kind(lhs, rhs):\n+ \"\"\"Perform the kind calculation of NumberKind*KetKind -> KetKind.\"\"\"\n+ return KetKind\n+\n+\n+@Mul._kind_dispatcher.register(_NumberKind, _BraKind)\n+def _mul_number_bra_kind(lhs, rhs):\n+ \"\"\"Perform the kind calculation of NumberKind*BraKind -> BraKind.\"\"\"\n+ return BraKind\n+\n+\n+@Mul._kind_dispatcher.register(_NumberKind, _OperatorKind)\n+def _mul_operator_kind(lhs, rhs):\n+ \"\"\"Perform the kind calculation of NumberKind*OperatorKind -> OperatorKind.\"\"\"\n+ return OperatorKind\ndiff --git a/sympy/physics/quantum/operator.py b/sympy/physics/quantum/operator.py\nindex d5869a1607d0..f0533e7f6c9b 100644\n--- a/sympy/physics/quantum/operator.py\n+++ b/sympy/physics/quantum/operator.py\n@@ -18,10 +18,13 @@\n from sympy.core.singleton import S\n from sympy.printing.pretty.stringpict import prettyForm\n from sympy.physics.quantum.dagger import Dagger\n+from sympy.physics.quantum.kind import OperatorKind\n from sympy.physics.quantum.qexpr import QExpr, dispatch_method\n from sympy.matrices import eye\n from sympy.utilities.exceptions import sympy_deprecation_warning\n \n+\n+\n __all__ = [\n 'Operator',\n 'HermitianOperator',\n@@ -108,6 +111,8 @@ class Operator(QExpr):\n def default_args(self):\n return (\"O\",)\n \n+ kind = OperatorKind\n+\n #-------------------------------------------------------------------------\n # Printing\n #-------------------------------------------------------------------------\n@@ -185,13 +190,6 @@ def inverse(self):\n def _eval_inverse(self):\n return self**(-1)\n \n- def __mul__(self, other):\n-\n- if isinstance(other, IdentityOperator):\n- return self\n-\n- return Mul(self, other)\n-\n \n class HermitianOperator(Operator):\n \"\"\"A Hermitian operator that satisfies H == Dagger(H).\n@@ -331,13 +329,6 @@ def _print_contents_pretty(self, printer, *args):\n def _print_contents_latex(self, printer, *args):\n return r'{\\mathcal{I}}'\n \n- def __mul__(self, other):\n-\n- if isinstance(other, (Operator, Dagger)):\n- return other\n-\n- return Mul(self, other)\n-\n def _represent_default_basis(self, **options):\n if not self.N or self.N == oo:\n raise NotImplementedError('Cannot represent infinite dimensional' +\n@@ -372,7 +363,6 @@ class OuterProduct(Operator):\n Create a simple outer product by hand and take its dagger::\n \n >>> from sympy.physics.quantum import Ket, Bra, OuterProduct, Dagger\n- >>> from sympy.physics.quantum import Operator\n \n >>> k = Ket('k')\n >>> b = Bra('b')\n@@ -388,24 +378,17 @@ class OuterProduct(Operator):\n >>> Dagger(op)\n |b><k|\n \n- In simple products of kets and bras outer products will be automatically\n+ In quantum expressions, outer products will be automatically\n identified and created::\n \n >>> k*b\n |k><b|\n \n- But in more complex expressions, outer products are not automatically\n- created::\n-\n- >>> A = Operator('A')\n- >>> A*k*b\n- A*|k>*<b|\n-\n- A user can force the creation of an outer product in a complex expression\n- by using parentheses to group the ket and bra::\n+ However, the creation of inner products always has higher priority than that of\n+ outer products:\n \n- >>> A*(k*b)\n- A*|k><b|\n+ >>> b*k*b\n+ <b|k>*<b|\n \n References\n ==========\ndiff --git a/sympy/physics/quantum/qapply.py b/sympy/physics/quantum/qapply.py\nindex 87379c7e3e96..6d7f74b0c320 100644\n--- a/sympy/physics/quantum/qapply.py\n+++ b/sympy/physics/quantum/qapply.py\n@@ -6,6 +6,7 @@\n \n from 
sympy.concrete import Sum\n from sympy.core.add import Add\n+from sympy.core.kind import NumberKind\n from sympy.core.mul import Mul\n from sympy.core.power import Pow\n from sympy.core.singleton import S\n@@ -28,6 +29,17 @@\n # Main code\n #-----------------------------------------------------------------------------\n \n+\n+def ip_doit_func(e):\n+ \"\"\"Transform the inner products in an expression by calling ``.doit()``.\"\"\"\n+ return e.replace(InnerProduct, lambda *args: InnerProduct(*args).doit())\n+\n+\n+def sum_doit_func(e):\n+ \"\"\"Transform the sums in an expression by calling ``.doit()``.\"\"\"\n+ return e.replace(Sum, lambda *args: Sum(*args).doit())\n+\n+\n def qapply(e, **options):\n \"\"\"Apply operators to states in a quantum expression.\n \n@@ -68,18 +80,22 @@ def qapply(e, **options):\n |k><b|\n >>> qapply(A * b.dual / (b * b.dual))\n |k>\n- >>> qapply(k.dual * A / (k.dual * k), dagger=True)\n- <b|\n >>> qapply(k.dual * A / (k.dual * k))\n- <k|*|k><b|/<k|k>\n+ <b|\n \"\"\"\n from sympy.physics.quantum.density import Density\n \n dagger = options.get('dagger', False)\n sum_doit = options.get('sum_doit', False)\n+ ip_doit = options.get('ip_doit', True)\n \n- if e == 0:\n- return S.Zero\n+ if isinstance(e, (int, complex, float)):\n+ e = sympify(e)\n+\n+ # Using the kind API here helps us to narrow what types of expressions\n+ # we call ``ip_doit_func`` on.\n+ if e.kind == NumberKind:\n+ return ip_doit_func(e) if ip_doit else e\n \n # This may be a bit aggressive but ensures that everything gets expanded\n # to its simplest form before trying to apply operators. This includes\n@@ -114,8 +130,7 @@ def qapply(e, **options):\n # For a Sum, call qapply on its function.\n elif isinstance(e, Sum):\n result = Sum(qapply(e.function, **options), *e.limits)\n- if sum_doit:\n- result = result.doit()\n+ result = sum_doit_func(result) if sum_doit else result\n return result\n \n # For a Pow, call qapply on its base.\n@@ -127,14 +142,17 @@ def qapply(e, **options):\n c_part, nc_part = e.args_cnc()\n c_mul = Mul(*c_part)\n nc_mul = Mul(*nc_part)\n- if isinstance(nc_mul, Mul):\n+ if not nc_part: # If we only have a commuting part, just return it.\n+ result = c_mul\n+ elif isinstance(nc_mul, Mul):\n result = c_mul*qapply_Mul(nc_mul, **options)\n else:\n result = c_mul*qapply(nc_mul, **options)\n if result == e and dagger:\n- return Dagger(qapply_Mul(Dagger(e), **options))\n- else:\n- return result\n+ result = Dagger(qapply_Mul(Dagger(e), **options))\n+ result = ip_doit_func(result) if ip_doit else result\n+ result = sum_doit_func(result) if sum_doit else result\n+ return result\n \n # In all other cases (State, Operator, Pow, Commutator, InnerProduct,\n # OuterProduct) we won't ever have operators to apply to kets.\n@@ -144,10 +162,9 @@ def qapply(e, **options):\n \n def qapply_Mul(e, **options):\n \n- ip_doit = options.get('ip_doit', True)\n- sum_doit = options.get('sum_doit', False)\n-\n args = list(e.args)\n+ extra = S.One\n+ result = None\n \n # If we only have 0 or 1 args, we have nothing to do and return.\n if len(args) <= 1 or not isinstance(e, Mul):\n@@ -171,6 +188,10 @@ def qapply_Mul(e, **options):\n args.append(lhs.ket)\n lhs = lhs.bra\n \n+ if isinstance(rhs, OuterProduct):\n+ extra = rhs.bra # Append to the right of the result\n+ rhs = rhs.ket\n+\n # Call .doit() on Commutator/AntiCommutator.\n if isinstance(lhs, (Commutator, AntiCommutator)):\n comm = lhs.doit()\n@@ -179,16 +200,16 @@ def qapply_Mul(e, **options):\n e.func(*(args + [comm.args[0], rhs])) +\n e.func(*(args + 
[comm.args[1], rhs])),\n **options\n- )\n+ )*extra\n else:\n- return qapply(e.func(*args)*comm*rhs, **options)\n+ return qapply(e.func(*args)*comm*rhs, **options)*extra\n \n # Apply tensor products of operators to states\n if isinstance(lhs, TensorProduct) and all(isinstance(arg, (Operator, State, Mul, Pow)) or arg == 1 for arg in lhs.args) and \\\n isinstance(rhs, TensorProduct) and all(isinstance(arg, (Operator, State, Mul, Pow)) or arg == 1 for arg in rhs.args) and \\\n len(lhs.args) == len(rhs.args):\n result = TensorProduct(*[qapply(lhs.args[n]*rhs.args[n], **options) for n in range(len(lhs.args))]).expand(tensorproduct=True)\n- return qapply_Mul(e.func(*args), **options)*result\n+ return qapply_Mul(e.func(*args), **options)*result*extra\n \n # For Sums, move the Sum to the right.\n if isinstance(rhs, Sum):\n@@ -197,19 +218,13 @@ def qapply_Mul(e, **options):\n raise ValueError('Duplicated dummy indices in separate sums in qapply.')\n limits = lhs.limits + rhs.limits\n result = Sum(qapply(lhs.function*rhs.function, **options), *limits)\n- if sum_doit:\n- result = result.doit()\n return qapply_Mul(e.func(*args)*result, **options)\n else:\n- result = Sum(qapply(lhs*rhs.function, **options), rhs.limits)\n- if sum_doit:\n- result = result.doit()\n+ result = Sum(qapply(lhs*rhs.function, **options), *rhs.limits)\n return qapply_Mul(e.func(*args)*result, **options)\n \n if isinstance(lhs, Sum):\n- result = Sum(qapply(lhs.function*rhs, **options), lhs.limits)\n- if sum_doit:\n- result = result.doit()\n+ result = Sum(qapply(lhs.function*rhs, **options), *lhs.limits)\n return qapply_Mul(e.func(*args)*result, **options)\n \n # Now try to actually apply the operator and build an inner product.\n@@ -233,19 +248,17 @@ def qapply_Mul(e, **options):\n if result is None:\n if isinstance(lhs, BraBase) and isinstance(rhs, KetBase):\n result = InnerProduct(lhs, rhs)\n- if ip_doit:\n- result = result.doit()\n \n # TODO: I may need to expand before returning the final result.\n- if result == 0:\n- return S.Zero\n+ if isinstance(result, (int, complex, float)):\n+ return sympify(result)\n elif result is None:\n if len(args) == 0:\n # We had two args to begin with so args=[].\n return e\n else:\n- return qapply_Mul(e.func(*(args + [lhs])), **options)*rhs\n+ return qapply_Mul(e.func(*(args + [lhs])), **options)*rhs*extra\n elif isinstance(result, InnerProduct):\n- return result*qapply_Mul(e.func(*args), **options)\n+ return result*qapply_Mul(e.func(*args), **options)*extra\n else: # result is a scalar times a Mul, Add or TensorProduct\n- return qapply(e.func(*args)*result, **options)\n+ return qapply(e.func(*args)*result, **options)*extra\ndiff --git a/sympy/physics/quantum/represent.py b/sympy/physics/quantum/represent.py\nindex cfb0ea627571..3a1ada80aa6a 100644\n--- a/sympy/physics/quantum/represent.py\n+++ b/sympy/physics/quantum/represent.py\n@@ -24,6 +24,7 @@\n from sympy.physics.quantum.qapply import qapply\n from sympy.physics.quantum.operatorset import operators_to_state, state_to_operators\n \n+\n __all__ = [\n 'represent',\n 'rep_innerproduct',\n@@ -133,9 +134,6 @@ def _represent_FooBasis(self, e, basis, **options)\n >>> y = XBra('y')\n >>> represent(X*x)\n x*DiracDelta(x - x_2)\n- >>> represent(X*x*y)\n- x*DiracDelta(x - x_3)*DiracDelta(x_1 - y)\n-\n \"\"\"\n \n format = options.get('format', 'sympy')\n@@ -199,15 +197,15 @@ def _represent_FooBasis(self, e, basis, **options)\n A = expr.args[0]\n B = expr.args[1]\n return represent(Mul(A, B) + Mul(B, A), **options)\n- elif isinstance(expr, 
InnerProduct):\n- return represent(Mul(expr.bra, expr.ket), **options)\n- elif not isinstance(expr, (Mul, OuterProduct)):\n+ elif not isinstance(expr, (Mul, OuterProduct, InnerProduct)):\n+ # We have removed special handling of inner products that used to be\n+ # required (before automatic transforms).\n # For numpy and scipy.sparse, we can only handle numerical prefactors.\n if format in ('numpy', 'scipy.sparse'):\n return _sympy_to_scalar(expr)\n return expr\n \n- if not isinstance(expr, (Mul, OuterProduct)):\n+ if not isinstance(expr, (Mul, OuterProduct, InnerProduct)):\n raise TypeError('Mul expected, got: %r' % expr)\n \n if \"index\" in options:\n@@ -302,7 +300,8 @@ def rep_innerproduct(expr, **options):\n result = prod.doit()\n \n format = options.get('format', 'sympy')\n- return expr._format_represent(result, format)\n+ result = expr._format_represent(result, format)\n+ return result\n \n \n def rep_expectation(expr, **options):\n@@ -345,7 +344,8 @@ def rep_expectation(expr, **options):\n bra = basis_kets[1].dual\n ket = basis_kets[0]\n \n- return qapply(bra*expr*ket)\n+ result = qapply(bra*expr*ket)\n+ return result\n \n \n def integrate_result(orig_expr, result, **options):\ndiff --git a/sympy/physics/quantum/state.py b/sympy/physics/quantum/state.py\nindex b2babef2e947..4ccd1ce9b987 100644\n--- a/sympy/physics/quantum/state.py\n+++ b/sympy/physics/quantum/state.py\n@@ -11,6 +11,8 @@\n from sympy.integrals.integrals import integrate\n from sympy.printing.pretty.stringpict import stringPict\n from sympy.physics.quantum.qexpr import QExpr, dispatch_method\n+from sympy.physics.quantum.kind import KetKind, BraKind\n+\n \n __all__ = [\n 'KetBase',\n@@ -208,6 +210,8 @@ class KetBase(StateBase):\n use Ket.\n \"\"\"\n \n+ kind = KetKind\n+\n lbracket = _straight_bracket\n rbracket = _rbracket\n lbracket_ucode = _straight_bracket_ucode\n@@ -223,22 +227,6 @@ def default_args(self):\n def dual_class(self):\n return BraBase\n \n- def __mul__(self, other):\n- \"\"\"KetBase*other\"\"\"\n- from sympy.physics.quantum.operator import OuterProduct\n- if isinstance(other, BraBase):\n- return OuterProduct(self, other)\n- else:\n- return Expr.__mul__(self, other)\n-\n- def __rmul__(self, other):\n- \"\"\"other*KetBase\"\"\"\n- from sympy.physics.quantum.innerproduct import InnerProduct\n- if isinstance(other, BraBase):\n- return InnerProduct(other, self)\n- else:\n- return Expr.__rmul__(self, other)\n-\n #-------------------------------------------------------------------------\n # _eval_* methods\n #-------------------------------------------------------------------------\n@@ -287,6 +275,8 @@ class BraBase(StateBase):\n instead use Bra.\n \"\"\"\n \n+ kind = BraKind\n+\n lbracket = _lbracket\n rbracket = _straight_bracket\n lbracket_ucode = _lbracket_ucode\n@@ -314,22 +304,6 @@ def default_args(self):\n def dual_class(self):\n return KetBase\n \n- def __mul__(self, other):\n- \"\"\"BraBase*other\"\"\"\n- from sympy.physics.quantum.innerproduct import InnerProduct\n- if isinstance(other, KetBase):\n- return InnerProduct(self, other)\n- else:\n- return Expr.__mul__(self, other)\n-\n- def __rmul__(self, other):\n- \"\"\"other*BraBase\"\"\"\n- from sympy.physics.quantum.operator import OuterProduct\n- if isinstance(other, KetBase):\n- return OuterProduct(other, self)\n- else:\n- return Expr.__rmul__(self, other)\n-\n def _represent(self, **options):\n \"\"\"A default represent that uses the Ket's version.\"\"\"\n from sympy.physics.quantum.dagger import Dagger\n@@ -626,7 +600,7 @@ def 
dual_class(self):\n return TimeDepKet\n \n \n-class OrthogonalState(State, StateBase):\n+class OrthogonalState(State):\n \"\"\"General abstract quantum state used as a base class for Ket and Bra.\"\"\"\n pass\n \ndiff --git a/sympy/physics/quantum/tensorproduct.py b/sympy/physics/quantum/tensorproduct.py\nindex 334f2f66bf3e..058b3459227e 100644\n--- a/sympy/physics/quantum/tensorproduct.py\n+++ b/sympy/physics/quantum/tensorproduct.py\n@@ -2,23 +2,27 @@\n \n from sympy.core.add import Add\n from sympy.core.expr import Expr\n+from sympy.core.kind import KindDispatcher\n from sympy.core.mul import Mul\n from sympy.core.power import Pow\n from sympy.core.sympify import sympify\n from sympy.matrices.dense import DenseMatrix as Matrix\n from sympy.matrices.immutable import ImmutableDenseMatrix as ImmutableMatrix\n from sympy.printing.pretty.stringpict import prettyForm\n+from sympy.utilities.exceptions import sympy_deprecation_warning\n \n-from sympy.physics.quantum.qexpr import QuantumError\n from sympy.physics.quantum.dagger import Dagger\n-from sympy.physics.quantum.commutator import Commutator\n-from sympy.physics.quantum.anticommutator import AntiCommutator\n-from sympy.physics.quantum.state import Ket, Bra\n+from sympy.physics.quantum.kind import (\n+ KetKind, _KetKind,\n+ BraKind, _BraKind,\n+ OperatorKind, _OperatorKind\n+)\n from sympy.physics.quantum.matrixutils import (\n numpy_ndarray,\n scipy_sparse_matrix,\n matrix_tensor_product\n )\n+from sympy.physics.quantum.state import Ket, Bra\n from sympy.physics.quantum.trace import Tr\n \n \n@@ -120,6 +124,14 @@ class TensorProduct(Expr):\n \"\"\"\n is_commutative = False\n \n+ _kind_dispatcher = KindDispatcher(\"TensorProduct_kind_dispatcher\", commutative=True)\n+\n+ @property\n+ def kind(self):\n+ \"\"\"Calculate the kind of a tensor product by looking at its children.\"\"\"\n+ arg_kinds = (a.kind for a in self.args)\n+ return self._kind_dispatcher(*arg_kinds)\n+\n def __new__(cls, *args):\n if isinstance(args[0], (Matrix, ImmutableMatrix, numpy_ndarray,\n scipy_sparse_matrix)):\n@@ -263,7 +275,7 @@ def _eval_expand_tensorproduct(self, **hints):\n \n def _eval_trace(self, **kwargs):\n indices = kwargs.get('indices', None)\n- exp = tensor_product_simp(self)\n+ exp = self\n \n if indices is None or len(indices) == 0:\n return Mul(*[Tr(arg).doit() for arg in exp.args])\n@@ -273,153 +285,79 @@ def _eval_trace(self, **kwargs):\n \n \n def tensor_product_simp_Mul(e):\n- \"\"\"Simplify a Mul with TensorProducts.\n-\n- Current the main use of this is to simplify a ``Mul`` of ``TensorProduct``s\n- to a ``TensorProduct`` of ``Muls``. 
It currently only works for relatively\n- simple cases where the initial ``Mul`` only has scalars and raw\n- ``TensorProduct``s, not ``Add``, ``Pow``, ``Commutator``s of\n- ``TensorProduct``s.\n-\n- Parameters\n- ==========\n-\n- e : Expr\n- A ``Mul`` of ``TensorProduct``s to be simplified.\n-\n- Returns\n- =======\n-\n- e : Expr\n- A ``TensorProduct`` of ``Mul``s.\n-\n- Examples\n- ========\n+ \"\"\"Simplify a Mul with tensor products.\n \n- This is an example of the type of simplification that this function\n- performs::\n-\n- >>> from sympy.physics.quantum.tensorproduct import \\\n- tensor_product_simp_Mul, TensorProduct\n- >>> from sympy import Symbol\n- >>> A = Symbol('A',commutative=False)\n- >>> B = Symbol('B',commutative=False)\n- >>> C = Symbol('C',commutative=False)\n- >>> D = Symbol('D',commutative=False)\n- >>> e = TensorProduct(A,B)*TensorProduct(C,D)\n- >>> e\n- AxB*CxD\n- >>> tensor_product_simp_Mul(e)\n- (A*C)x(B*D)\n+ .. deprecated:: 1.14.\n+ The transformations applied by this function are not done automatically\n+ when tensor products are combined.\n \n+ Originally, the main use of this function is to simplify a ``Mul`` of\n+ ``TensorProduct``s to a ``TensorProduct`` of ``Muls``.\n \"\"\"\n- # TODO: This won't work with Muls that have other composites of\n- # TensorProducts, like an Add, Commutator, etc.\n- # TODO: This only works for the equivalent of single Qbit gates.\n- if not isinstance(e, Mul):\n- return e\n- c_part, nc_part = e.args_cnc()\n- n_nc = len(nc_part)\n- if n_nc == 0:\n- return e\n- elif n_nc == 1:\n- if isinstance(nc_part[0], Pow):\n- return Mul(*c_part) * tensor_product_simp_Pow(nc_part[0])\n- return e\n- elif e.has(TensorProduct):\n- current = nc_part[0]\n- if not isinstance(current, TensorProduct):\n- if isinstance(current, Pow):\n- if isinstance(current.base, TensorProduct):\n- current = tensor_product_simp_Pow(current)\n- else:\n- raise TypeError('TensorProduct expected, got: %r' % current)\n- n_terms = len(current.args)\n- new_args = list(current.args)\n- for next in nc_part[1:]:\n- # TODO: check the hilbert spaces of next and current here.\n- if isinstance(next, TensorProduct):\n- if n_terms != len(next.args):\n- raise QuantumError(\n- 'TensorProducts of different lengths: %r and %r' %\n- (current, next)\n- )\n- for i in range(len(new_args)):\n- new_args[i] = new_args[i] * next.args[i]\n- else:\n- if isinstance(next, Pow):\n- if isinstance(next.base, TensorProduct):\n- new_tp = tensor_product_simp_Pow(next)\n- for i in range(len(new_args)):\n- new_args[i] = new_args[i] * new_tp.args[i]\n- else:\n- raise TypeError('TensorProduct expected, got: %r' % next)\n- else:\n- raise TypeError('TensorProduct expected, got: %r' % next)\n- current = next\n- return Mul(*c_part) * TensorProduct(*new_args)\n- elif e.has(Pow):\n- new_args = [ tensor_product_simp_Pow(nc) for nc in nc_part ]\n- return tensor_product_simp_Mul(Mul(*c_part) * TensorProduct(*new_args))\n- else:\n- return e\n+ sympy_deprecation_warning(\n+ \"\"\"\n+ tensor_product_simp_Mul has been deprecated. The transformations\n+ performed by this function are now done automatically when\n+ tensor products are multiplied.\n+ \"\"\",\n+ deprecated_since_version=\"1.14\",\n+ active_deprecations_target='deprecated-tensorproduct-simp'\n+ )\n+ return e\n \n def tensor_product_simp_Pow(e):\n- \"\"\"Evaluates ``Pow`` expressions whose base is ``TensorProduct``\"\"\"\n- if not isinstance(e, Pow):\n- return e\n+ \"\"\"Evaluates ``Pow`` expressions whose base is ``TensorProduct``\n+\n+ .. 
deprecated:: 1.14.\n+ The transformations applied by this function are not done automatically\n+ when tensor products are combined.\n+ \"\"\"\n+ sympy_deprecation_warning(\n+ \"\"\"\n+ tensor_product_simp_Pow has been deprecated. The transformations\n+ performed by this function are now done automatically when\n+ tensor products are exponentiated.\n+ \"\"\",\n+ deprecated_since_version=\"1.14\",\n+ active_deprecations_target='deprecated-tensorproduct-simp'\n+ )\n+ return e\n \n- if isinstance(e.base, TensorProduct):\n- return TensorProduct(*[ b**e.exp for b in e.base.args])\n- else:\n- return e\n \n def tensor_product_simp(e, **hints):\n- \"\"\"Try to simplify and combine TensorProducts.\n+ \"\"\"Try to simplify and combine tensor products.\n \n- In general this will try to pull expressions inside of ``TensorProducts``.\n- It currently only works for relatively simple cases where the products have\n- only scalars, raw ``TensorProducts``, not ``Add``, ``Pow``, ``Commutators``\n- of ``TensorProducts``. It is best to see what it does by showing examples.\n+ .. deprecated:: 1.14.\n+ The transformations applied by this function are not done automatically\n+ when tensor products are combined.\n \n- Examples\n- ========\n+ Originally, this function tried to pull expressions inside of ``TensorProducts``.\n+ It only worked for relatively simple cases where the products have\n+ only scalars, raw ``TensorProducts``, not ``Add``, ``Pow``, ``Commutators``\n+ of ``TensorProducts``.\n+ \"\"\"\n+ sympy_deprecation_warning(\n+ \"\"\"\n+ tensor_product_simp has been deprecated. The transformations\n+ performed by this function are now done automatically when\n+ tensor products are combined.\n+ \"\"\",\n+ deprecated_since_version=\"1.14\",\n+ active_deprecations_target='deprecated-tensorproduct-simp'\n+ )\n+ return e\n \n- >>> from sympy.physics.quantum import tensor_product_simp\n- >>> from sympy.physics.quantum import TensorProduct\n- >>> from sympy import Symbol\n- >>> A = Symbol('A',commutative=False)\n- >>> B = Symbol('B',commutative=False)\n- >>> C = Symbol('C',commutative=False)\n- >>> D = Symbol('D',commutative=False)\n \n- First see what happens to products of tensor products:\n+@TensorProduct._kind_dispatcher.register(_OperatorKind, _OperatorKind)\n+def find_op_kind(e1, e2):\n+ return OperatorKind\n \n- >>> e = TensorProduct(A,B)*TensorProduct(C,D)\n- >>> e\n- AxB*CxD\n- >>> tensor_product_simp(e)\n- (A*C)x(B*D)\n \n- This is the core logic of this function, and it works inside, powers, sums,\n- commutators and anticommutators as well:\n+@TensorProduct._kind_dispatcher.register(_KetKind, _KetKind)\n+def find_ket_kind(e1, e2):\n+ return KetKind\n \n- >>> tensor_product_simp(e**2)\n- (A*C)x(B*D)**2\n \n- \"\"\"\n- if isinstance(e, Add):\n- return Add(*[tensor_product_simp(arg) for arg in e.args])\n- elif isinstance(e, Pow):\n- if isinstance(e.base, TensorProduct):\n- return tensor_product_simp_Pow(e)\n- else:\n- return tensor_product_simp(e.base) ** e.exp\n- elif isinstance(e, Mul):\n- return tensor_product_simp_Mul(e)\n- elif isinstance(e, Commutator):\n- return Commutator(*[tensor_product_simp(arg) for arg in e.args])\n- elif isinstance(e, AntiCommutator):\n- return AntiCommutator(*[tensor_product_simp(arg) for arg in e.args])\n- else:\n- return e\n+@TensorProduct._kind_dispatcher.register(_BraKind, _BraKind)\n+def find_bra_kind(e1, e2):\n+ return BraKind\ndiff --git a/sympy/physics/quantum/tests/test_cartesian.py b/sympy/physics/quantum/tests/test_cartesian.py\nindex ddfd28d8b5f4..f1dd435fab68 
100644\n--- a/sympy/physics/quantum/tests/test_cartesian.py\n+++ b/sympy/physics/quantum/tests/test_cartesian.py\n@@ -7,6 +7,7 @@\n from sympy.functions.elementary.miscellaneous import sqrt\n from sympy.functions.special.delta_functions import DiracDelta\n from sympy.sets.sets import Interval\n+from sympy.testing.pytest import XFAIL\n \n from sympy.physics.quantum import qapply, represent, L2, Dagger\n from sympy.physics.quantum import Commutator, hbar\n@@ -33,8 +34,6 @@ def test_x():\n assert represent(XBra(x)) == DiracDelta(-x + x_1)\n assert XBra(x).position == x\n assert represent(XOp()*XKet()) == x*DiracDelta(x - x_2)\n- assert represent(XOp()*XKet()*XBra('y')) == \\\n- x*DiracDelta(x - x_3)*DiracDelta(x_1 - y)\n assert represent(XBra(\"y\")*XKet()) == DiracDelta(x - y)\n assert represent(\n XKet()*XBra()) == DiracDelta(x - x_2) * DiracDelta(x_1 - x)\n@@ -49,6 +48,16 @@ def test_x():\n hbar*I*DiracDelta(px - px_2)*DifferentialOperator(px)\n \n \n+@XFAIL\n+def _text_x_broken():\n+ # represent has some broken logic that is relying in particular\n+ # forms of input, rather than a full and proper handling of\n+ # all valid quantum expressions. Marking this test as XFAIL until\n+ # we can refactor represent.\n+ assert represent(XOp()*XKet()*XBra('y')) == \\\n+ x*DiracDelta(x - x_3)*DiracDelta(x_1 - y)\n+\n+\n def test_p():\n assert Px.hilbert_space == L2(Interval(S.NegativeInfinity, S.Infinity))\n assert qapply(Px*PxKet(px)) == px*PxKet(px)\ndiff --git a/sympy/physics/quantum/tests/test_kind.py b/sympy/physics/quantum/tests/test_kind.py\nnew file mode 100644\nindex 000000000000..e50467db4c2d\n--- /dev/null\n+++ b/sympy/physics/quantum/tests/test_kind.py\n@@ -0,0 +1,75 @@\n+\"\"\"Tests for sympy.physics.quantum.kind.\"\"\"\n+\n+from sympy.core.kind import NumberKind, UndefinedKind\n+from sympy.core.symbol import symbols\n+\n+from sympy.physics.quantum.kind import (\n+ OperatorKind, KetKind, BraKind\n+)\n+from sympy.physics.quantum.anticommutator import AntiCommutator\n+from sympy.physics.quantum.commutator import Commutator\n+from sympy.physics.quantum.dagger import Dagger\n+from sympy.physics.quantum.operator import Operator\n+from sympy.physics.quantum.state import Ket, Bra\n+from sympy.physics.quantum.tensorproduct import TensorProduct\n+\n+k = Ket('k')\n+b = Bra('k')\n+A = Operator('A')\n+B = Operator('B')\n+x, y, z = symbols('x y z', integer=True)\n+\n+def test_bra_ket():\n+ assert k.kind == KetKind\n+ assert b.kind == BraKind\n+ assert (b*k).kind == NumberKind # inner product\n+ assert (x*k).kind == KetKind\n+ assert (x*b).kind == BraKind\n+\n+\n+def test_operator_kind():\n+ assert A.kind == OperatorKind\n+ assert (A*B).kind == OperatorKind\n+ assert (x*A).kind == OperatorKind\n+ assert (x*A*B).kind == OperatorKind\n+ assert (x*k*b).kind == OperatorKind # outer product\n+\n+\n+def test_undefind_kind():\n+ # Because of limitations in the kind dispatcher API, we are currently\n+ # unable to have OperatorKind*KetKind -> KetKind (and similar for bras).\n+ assert (A*k).kind == UndefinedKind\n+ assert (b*A).kind == UndefinedKind\n+ assert (x*b*A*k).kind == UndefinedKind\n+\n+\n+def test_dagger_kind():\n+ assert Dagger(k).kind == BraKind\n+ assert Dagger(b).kind == KetKind\n+ assert Dagger(A).kind == OperatorKind\n+\n+\n+def test_commutator_kind():\n+ assert Commutator(A, B).kind == OperatorKind\n+ assert Commutator(A, x*B).kind == OperatorKind\n+ assert Commutator(x*A, B).kind == OperatorKind\n+ assert Commutator(x*A, x*B).kind == OperatorKind\n+\n+\n+def test_anticommutator_kind():\n+ 
assert AntiCommutator(A, B).kind == OperatorKind\n+ assert AntiCommutator(A, x*B).kind == OperatorKind\n+ assert AntiCommutator(x*A, B).kind == OperatorKind\n+ assert AntiCommutator(x*A, x*B).kind == OperatorKind\n+\n+\n+def test_tensorproduct_kind():\n+ assert TensorProduct(k,k).kind == KetKind\n+ assert TensorProduct(b,b).kind == BraKind\n+ assert TensorProduct(x*k,y*k).kind == KetKind\n+ assert TensorProduct(x*b,y*b).kind == BraKind\n+ assert TensorProduct(x*b*k, y*b*k).kind == NumberKind\n+ assert TensorProduct(x*k*b, y*k*b).kind == OperatorKind\n+ assert TensorProduct(A, B).kind == OperatorKind\n+ assert TensorProduct(A, x*B).kind == OperatorKind\n+ assert TensorProduct(x*A, B).kind == OperatorKind\ndiff --git a/sympy/physics/quantum/tests/test_operator.py b/sympy/physics/quantum/tests/test_operator.py\nindex 8950fc9b931d..100cacd9a800 100644\n--- a/sympy/physics/quantum/tests/test_operator.py\n+++ b/sympy/physics/quantum/tests/test_operator.py\n@@ -2,6 +2,7 @@\n from sympy.core.mul import Mul\n from sympy.core.numbers import (Integer, pi)\n from sympy.core.symbol import (Symbol, symbols)\n+from sympy.core.sympify import sympify\n from sympy.functions.elementary.trigonometric import sin\n from sympy.physics.quantum.qexpr import QExpr\n from sympy.physics.quantum.dagger import Dagger\n@@ -95,6 +96,7 @@ def test_identity():\n I = IdentityOperator()\n O = Operator('O')\n x = Symbol(\"x\")\n+ three = sympify(3)\n \n assert isinstance(I, IdentityOperator)\n assert isinstance(I, Operator)\n@@ -104,8 +106,8 @@ def test_identity():\n assert I * Dagger(O) == Dagger(O)\n assert Dagger(O) * I == Dagger(O)\n assert isinstance(I * I, IdentityOperator)\n- assert isinstance(3 * I, Mul)\n- assert isinstance(I * x, Mul)\n+ assert three * I == three\n+ assert I * x == x\n assert I.inv() == I\n assert Dagger(I) == I\n assert qapply(I * O) == O\ndiff --git a/sympy/physics/quantum/tests/test_qapply.py b/sympy/physics/quantum/tests/test_qapply.py\nindex 839477822416..be6f68d9869d 100644\n--- a/sympy/physics/quantum/tests/test_qapply.py\n+++ b/sympy/physics/quantum/tests/test_qapply.py\n@@ -99,7 +99,7 @@ def test_tensorproduct():\n assert qapply(TensorProduct(a, Dagger(b) * b) * ket1) == 2 * ket3\n assert qapply(bra1 * TensorProduct(a, b * b),\n dagger=True) == sqrt(2) * bra2\n- assert qapply(bra2 * ket1).doit() == TensorProduct(1, 1)\n+ assert qapply(bra2 * ket1).doit() == S.One\n assert qapply(TensorProduct(a, b * b) * ket1) == sqrt(2) * ket2\n assert qapply(Dagger(TensorProduct(a, b * b) * ket1),\n dagger=True) == sqrt(2) * Dagger(ket2)\n@@ -143,9 +143,9 @@ def test_issue24158_ket_times_op():\n assert qapply(P1) == QubitBra(0) * XGate(0) # qapply(P1) -> 0 before fix\n P1 = qapply(P1, dagger = True) # unsatisfactorily -> <0|*X(0), expect <1| since dagger=True\n assert qapply(P1, dagger = True) == QubitBra(1) # qapply(P1, dagger=True) -> 0 before fix\n- P2 = QubitBra(0) * QubitBra(0) * Qubit(0) * XGate(0) # 'forgot' to set brackets\n+ P2 = QubitBra(0) * (QubitBra(0) * Qubit(0)) * XGate(0) # 'forgot' to set brackets\n P2 = qapply(P2, dagger = True) # unsatisfactorily -> <0|*X(0), expect <1| since dagger=True\n- assert qapply(P2, dagger = True) == QubitBra(1) # qapply(P1) -> 0 before fix\n+ assert P2 == QubitBra(1) # qapply(P1) -> 0 before fix\n # Pull Request 24237: IdentityOperator from the right without dagger=True option\n with warns_deprecated_sympy():\n assert qapply(QubitBra(1)*IdentityOperator()) == QubitBra(1)\ndiff --git a/sympy/physics/quantum/tests/test_sho1d.py 
b/sympy/physics/quantum/tests/test_sho1d.py\nindex 18d3862033ef..36ba792293a8 100644\n--- a/sympy/physics/quantum/tests/test_sho1d.py\n+++ b/sympy/physics/quantum/tests/test_sho1d.py\n@@ -40,8 +40,8 @@\n omega = Symbol('omega')\n m = Symbol('m')\n ndim = Integer(4)\n-p = Symbol('p', is_integer=True)\n-q = Symbol('q', nonnegative=True, is_integer=True)\n+p = Symbol('p', integer=True)\n+q = Symbol('q', nonnegative=True, integer=True)\n \n \n np = import_module('numpy')\n@@ -167,10 +167,10 @@ def test_sho_coherant_state():\n assert simplify(qapply(SHOBra(q)*a*cstate, sum_doit=True)) == simplify(qapply(SHOBra(q)*alpha*cstate, sum_doit=True))\n \n def test_issue_26495():\n- nbar = Symbol('nbar', is_real=True, nonnegative=True)\n- n = Symbol('n', is_integer=True)\n- i = Symbol('i', is_integer=True, nonnegative=True)\n- j = Symbol('j', is_integer=True, nonnegative=True)\n- rho = (1/(1+nbar))*Sum((nbar/(1+nbar))**n*SHOKet(n)*SHOBra(n), (n,0,oo))\n+ nbar = Symbol('nbar', real=True, nonnegative=True)\n+ n = Symbol('n', integer=True)\n+ i = Symbol('i', integer=True, nonnegative=True)\n+ j = Symbol('j', integer=True, nonnegative=True)\n+ rho = Sum((nbar/(1+nbar))**n*SHOKet(n)*SHOBra(n), (n,0,oo))\n result = qapply(SHOBra(i)*rho*SHOKet(j), sum_doit=True)\n- assert simplify(result) == nbar**j*(nbar+1)**(-j-1)*KroneckerDelta(i,j)\n+ assert simplify(result) == (nbar/(nbar+1))**i*KroneckerDelta(i,j)\ndiff --git a/sympy/physics/quantum/tests/test_spin.py b/sympy/physics/quantum/tests/test_spin.py\nindex 2bc038e656b5..f905a7de5aed 100644\n--- a/sympy/physics/quantum/tests/test_spin.py\n+++ b/sympy/physics/quantum/tests/test_spin.py\n@@ -8,6 +8,8 @@\n from sympy.functions.elementary.trigonometric import (cos, sin)\n from sympy.matrices.dense import Matrix\n from sympy.abc import alpha, beta, gamma, j, m\n+from sympy.simplify import simplify\n+\n from sympy.physics.quantum import hbar, represent, Commutator, InnerProduct\n from sympy.physics.quantum.qapply import qapply\n from sympy.physics.quantum.tensorproduct import TensorProduct\n@@ -28,6 +30,16 @@\n 'j12 j13 j24 j34 j123 j134 mi mi1 mp')\n \n \n+def assert_simplify_expand(e1, e2):\n+ \"\"\"Helper for simplifying and expanding results.\n+\n+ This is needed to help us test complex expressions whose form\n+ might change in subtle ways as the rest of sympy evolves.\n+ \"\"\"\n+ assert simplify(e1.expand(tensorproduct=True)) == \\\n+ simplify(e2.expand(tensorproduct=True))\n+\n+\n def test_represent_spin_operators():\n assert represent(Jx) == hbar*Matrix([[0, 1], [1, 0]])/2\n assert represent(\n@@ -3738,18 +3750,22 @@ def test_jplus():\n hbar*sqrt(j**2 + j - m**2 - m)*JzKetCoupled(j, m + 1, (j1, j2))\n # Uncoupled operators, uncoupled states\n # Numerical\n- assert qapply(TensorProduct(Jplus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \\\n- -hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \\\n+ e1 = qapply(TensorProduct(Jplus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1)))\n+ e2 = -hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \\\n hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1))\n- assert qapply(TensorProduct(1, Jplus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \\\n- -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jplus)*TensorProduct(JxKet(1, 1), JxKet(1, -1)))\n+ e2 = -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) + \\\n hbar*sqrt(2)*TensorProduct(JxKet(1, 1), JxKet(1, 0))/2\n- assert qapply(TensorProduct(Jplus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \\\n- 
hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(Jplus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1)))\n+ e2 = hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 + \\\n hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1))\n- assert qapply(TensorProduct(1, Jplus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \\\n- -hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jplus)*TensorProduct(JyKet(1, 1), JyKet(1, -1)))\n+ e2 = -hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \\\n hbar*sqrt(2)*TensorProduct(JyKet(1, 1), JyKet(1, 0))/2\n+ assert_simplify_expand(e1, e2)\n assert qapply(\n TensorProduct(Jplus, 1)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == 0\n assert qapply(TensorProduct(1, Jplus)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \\\n@@ -3826,18 +3842,22 @@ def test_jminus():\n hbar*sqrt(j**2 + j - m**2 + m)*JzKetCoupled(j, m - 1, (j1, j2))\n # Uncoupled operators, uncoupled states\n # Numerical\n- assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \\\n- hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \\\n+ e1 = qapply(TensorProduct(Jminus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1)))\n+ e2 = hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \\\n hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1))\n- assert qapply(TensorProduct(1, Jminus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \\\n- -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jminus)*TensorProduct(JxKet(1, 1), JxKet(1, -1)))\n+ e2 = -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - \\\n hbar*sqrt(2)*TensorProduct(JxKet(1, 1), JxKet(1, 0))/2\n- assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \\\n- hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 - \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(Jminus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1)))\n+ e2 = hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 - \\\n hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1))\n- assert qapply(TensorProduct(1, Jminus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \\\n- hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jminus)*TensorProduct(JyKet(1, 1), JyKet(1, -1)))\n+ e2 = hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \\\n hbar*sqrt(2)*TensorProduct(JyKet(1, 1), JyKet(1, 0))/2\n+ assert_simplify_expand(e1, e2)\n assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \\\n sqrt(2)*hbar*TensorProduct(JzKet(1, 0), JzKet(1, -1))\n assert qapply(TensorProduct(\n@@ -3915,24 +3935,30 @@ def test_j2():\n assert qapply(TensorProduct(1, J2)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \\\n 2*hbar**2*TensorProduct(JzKet(1, 1), JzKet(1, -1))\n # Symbolic\n- assert qapply(TensorProduct(J2, 1)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) == \\\n- hbar**2*j1**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \\\n+ e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)))\n+ e2 = hbar**2*j1**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \\\n hbar**2*j1*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))\n- assert qapply(TensorProduct(1, J2)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) == \\\n- hbar**2*j2**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, 
J2)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)))\n+ e2 = hbar**2*j2**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \\\n hbar**2*j2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))\n- assert qapply(TensorProduct(J2, 1)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \\\n- hbar**2*j1**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)))\n+ e2 = hbar**2*j1**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \\\n hbar**2*j1*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))\n- assert qapply(TensorProduct(1, J2)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \\\n- hbar**2*j2**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, J2)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)))\n+ e2 = hbar**2*j2**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \\\n hbar**2*j2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))\n- assert qapply(TensorProduct(J2, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- hbar**2*j1**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = hbar**2*j1**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \\\n hbar**2*j1*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))\n- assert qapply(TensorProduct(1, J2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- hbar**2*j2**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, J2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = hbar**2*j2**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \\\n hbar**2*j2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))\n+ assert_simplify_expand(e1, e2)\n \n \n def test_jx():\n@@ -4016,14 +4042,16 @@ def test_jx():\n TensorProduct(Sum(hbar*mi*WignerD(j1, mi, m1, 0, 0, pi/2) * Sum(WignerD(j1, mi1, mi, pi*Rational(3, 2), 0, 0)*JyKet(j1, mi1), (mi1, -j1, j1)), (mi, -j1, j1)), JyKet(j2, m2))\n assert qapply(TensorProduct(1, Jx)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \\\n TensorProduct(JyKet(j1, m1), Sum(hbar*mi*WignerD(j2, mi, m2, 0, 0, pi/2) * Sum(WignerD(j2, mi1, mi, pi*Rational(3, 2), 0, 0)*JyKet(j2, mi1), (mi1, -j2, j2)), (mi, -j2, j2)))\n- assert qapply(TensorProduct(Jx, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- hbar*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \\\n+ e1 = qapply(TensorProduct(Jx, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = hbar*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \\\n hbar*sqrt(\n j1**2 + j1 - m1**2 + m1)*TensorProduct(JzKet(j1, m1 - 1), JzKet(j2, m2))/2\n- assert qapply(TensorProduct(1, Jx)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- hbar*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jx)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = hbar*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \\\n hbar*sqrt(\n j2**2 + j2 - m2**2 + m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 - 1))/2\n+ assert_simplify_expand(e1, e2)\n \n \n def test_jy():\n@@ -4107,14 +4135,16 @@ def test_jy():\n hbar*m1*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))\n assert qapply(TensorProduct(1, Jy)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \\\n hbar*m2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))\n- assert 
qapply(TensorProduct(Jy, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- -hbar*I*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \\\n+ e1 = qapply(TensorProduct(Jy, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = -hbar*I*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \\\n hbar*I*sqrt(\n j1**2 + j1 - m1**2 + m1)*TensorProduct(JzKet(j1, m1 - 1), JzKet(j2, m2))/2\n- assert qapply(TensorProduct(1, Jy)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \\\n- -hbar*I*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \\\n+ assert_simplify_expand(e1, e2)\n+ e1 = qapply(TensorProduct(1, Jy)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)))\n+ e2 = -hbar*I*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \\\n hbar*I*sqrt(\n j2**2 + j2 - m2**2 + m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 - 1))/2\n+ assert_simplify_expand(e1, e2)\n \n \n def test_jz():\ndiff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py\nindex 5c4560932861..c17d533ae6d4 100644\n--- a/sympy/physics/quantum/tests/test_tensorproduct.py\n+++ b/sympy/physics/quantum/tests/test_tensorproduct.py\n@@ -2,6 +2,7 @@\n from sympy.core.symbol import symbols\n from sympy.core.expr import unchanged\n from sympy.matrices import Matrix, SparseMatrix, ImmutableMatrix\n+from sympy.testing.pytest import warns_deprecated_sympy\n \n from sympy.physics.quantum.commutator import Commutator as Comm\n from sympy.physics.quantum.tensorproduct import TensorProduct\n@@ -9,12 +10,16 @@\n from sympy.physics.quantum.tensorproduct import tensor_product_simp\n from sympy.physics.quantum.dagger import Dagger\n from sympy.physics.quantum.qubit import Qubit, QubitBra\n-from sympy.physics.quantum.operator import OuterProduct\n+from sympy.physics.quantum.operator import OuterProduct, Operator\n from sympy.physics.quantum.density import Density\n from sympy.physics.quantum.trace import Tr\n \n-A, B, C, D = symbols('A,B,C,D', commutative=False)\n+A = Operator('A')\n+B = Operator('B')\n+C = Operator('C')\n+D = Operator('D')\n x = symbols('x')\n+y = symbols('y', integer=True, positive=True)\n \n mat1 = Matrix([[1, 2*I], [1 + I, 3]])\n mat2 = Matrix([[2*I, 3], [4*I, 2]])\n@@ -61,12 +66,14 @@ def test_tensor_product_commutator():\n \n \n def test_tensor_product_simp():\n- assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)\n- # tests for Pow-expressions\n- assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x)\n- assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2,B**2)\n- assert tensor_product_simp(x*(TP(A, B)**2)*TP(C,D)) == x*TP(A**2*C,B**2*D)\n- assert tensor_product_simp(TP(A,B)-TP(C,D)**x) == TP(A,B)-TP(C**x,D**x)\n+ with warns_deprecated_sympy():\n+ assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)\n+ # tests for Pow-expressions\n+ assert TP(A, B)**y == TP(A**y, B**y)\n+ assert tensor_product_simp(TP(A, B)**y) == TP(A**y, B**y)\n+ assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2,B**2)\n+ assert tensor_product_simp(x*(TP(A, B)**2)*TP(C,D)) == x*TP(A**2*C,B**2*D)\n+ assert tensor_product_simp(TP(A,B)-TP(C,D)**y) == TP(A,B)-TP(C**y,D**y)\n \n \n def test_issue_5923():\n@@ -82,8 +89,6 @@ def test_eval_trace():\n #and density operators. 
Since, the test is more to test the behavior of\n #TensorProducts it remains here\n \n- A, B, C, D, E, F = symbols('A B C D E F', commutative=False)\n-\n # Density with simple tensor products as args\n t = TensorProduct(A, B)\n d = Density([t, 1.0])\ndiff --git a/sympy/physics/quantum/tests/test_transforms.py b/sympy/physics/quantum/tests/test_transforms.py\nnew file mode 100644\nindex 000000000000..5f68b0362d5c\n--- /dev/null\n+++ b/sympy/physics/quantum/tests/test_transforms.py\n@@ -0,0 +1,75 @@\n+\"\"\"Tests of transforms of quantum expressions for Mul and Pow.\"\"\"\n+\n+from sympy.core.symbol import symbols\n+from sympy.testing.pytest import raises\n+\n+from sympy.physics.quantum.operator import (\n+ Operator, OuterProduct\n+)\n+from sympy.physics.quantum.state import Ket, Bra\n+from sympy.physics.quantum.innerproduct import InnerProduct\n+from sympy.physics.quantum.tensorproduct import TensorProduct\n+\n+\n+k1 = Ket('k1')\n+k2 = Ket('k2')\n+k3 = Ket('k3')\n+b1 = Bra('b1')\n+b2 = Bra('b2')\n+b3 = Bra('b3')\n+A = Operator('A')\n+B = Operator('B')\n+C = Operator('C')\n+x, y, z = symbols('x y z')\n+\n+\n+def test_bra_ket():\n+ assert b1*k1 == InnerProduct(b1, k1)\n+ assert k1*b1 == OuterProduct(k1, b1)\n+ # Test priority of inner product\n+ assert OuterProduct(k1, b1)*k2 == InnerProduct(b1, k2)*k1\n+ assert b1*OuterProduct(k1, b2) == InnerProduct(b1, k1)*b2\n+\n+\n+def test_tensor_product():\n+ # We are attempting to be rigourous and raise TypeError when a user tries\n+ # to combine bras, kets, and operators in a manner that doesn't make sense.\n+ # In particular, we are not trying to interpret regular ``*`` multiplication\n+ # as a tensor product.\n+ with raises(TypeError):\n+ assert k1*k1 == TensorProduct(k1, k1)\n+ with raises(TypeError):\n+ assert b1*b1 == TensorProduct(b1, b1)\n+ with raises(TypeError):\n+ assert k1*TensorProduct(k2, k3) == TensorProduct(k1, k2, k3)\n+ with raises(TypeError):\n+ assert b1*TensorProduct(b2, b3) == TensorProduct(b1, b2, b3)\n+ with raises(TypeError):\n+ assert TensorProduct(k2, k3)*k1 == TensorProduct(k2, k3, k1)\n+ with raises(TypeError):\n+ assert TensorProduct(b2, b3)*b1 == TensorProduct(b2, b3, b1)\n+\n+ assert TensorProduct(A, B, C)*TensorProduct(k1, k2, k3) == \\\n+ TensorProduct(A*k1, B*k2, C*k3)\n+ assert TensorProduct(b1, b2, b3)*TensorProduct(A, B, C) == \\\n+ TensorProduct(b1*A, b2*B, b3*C)\n+ assert TensorProduct(b1, b2, b3)*TensorProduct(k1, k2, k3) == \\\n+ InnerProduct(b1, k1)*InnerProduct(b2, k2)*InnerProduct(b3, k3)\n+ assert TensorProduct(b1, b2, b3)*TensorProduct(A, B, C)*TensorProduct(k1, k2, k3) == \\\n+ TensorProduct(b1*A*k1, b2*B*k2, b3*C*k3)\n+\n+\n+def test_outer_product():\n+ assert OuterProduct(k1, b1)*OuterProduct(k2, b2) == \\\n+ InnerProduct(b1, k2)*OuterProduct(k1, b2)\n+\n+\n+def test_compound():\n+ e1 = b1*A*B*k1*b2*k2*b3\n+ assert e1 == InnerProduct(b2, k2)*b1*A*B*OuterProduct(k1, b3)\n+\n+ e2 = TensorProduct(k1, k2)*TensorProduct(b1, b2)\n+ assert e2 == TensorProduct(\n+ OuterProduct(k1, b1),\n+ OuterProduct(k2, b2)\n+ )\ndiff --git a/sympy/physics/quantum/transforms.py b/sympy/physics/quantum/transforms.py\nnew file mode 100644\nindex 000000000000..d646d3e8e779\n--- /dev/null\n+++ b/sympy/physics/quantum/transforms.py\n@@ -0,0 +1,291 @@\n+\"\"\"Transforms that are always applied to quantum expressions.\n+\n+This module uses the kind and _constructor_postprocessor_mapping APIs\n+to transform different combinations of Operators, Bras, and Kets into\n+Inner/Outer/TensorProducts. 
These transformations are registered\n+with the postprocessing API of core classes like `Mul` and `Pow` and\n+are always applied to any expression involving Bras, Kets, and\n+Operators. This API replaces the custom `__mul__` and `__pow__`\n+methods of the quantum classes, which were found to be inconsistent.\n+\n+THIS IS EXPERIMENTAL.\n+\"\"\"\n+from sympy.core.basic import Basic\n+from sympy.core.expr import Expr\n+from sympy.core.mul import Mul\n+from sympy.core.singleton import S\n+from sympy.multipledispatch.dispatcher import (\n+ Dispatcher, ambiguity_register_error_ignore_dup\n+)\n+from sympy.utilities.misc import debug\n+\n+from sympy.physics.quantum.innerproduct import InnerProduct\n+from sympy.physics.quantum.kind import KetKind, BraKind, OperatorKind\n+from sympy.physics.quantum.operator import (\n+ OuterProduct, IdentityOperator, Operator\n+)\n+from sympy.physics.quantum.state import BraBase, KetBase, StateBase\n+from sympy.physics.quantum.tensorproduct import TensorProduct\n+\n+\n+#-----------------------------------------------------------------------------\n+# Multipledispatch based transformed for Mul and Pow\n+#-----------------------------------------------------------------------------\n+\n+_transform_state_pair = Dispatcher('_transform_state_pair')\n+\"\"\"Transform a pair of expression in a Mul to their canonical form.\n+\n+All functions that are registered with this dispatcher need to take\n+two inputs and return either tuple of transformed outputs, or None if no\n+transform is applied. The output tuple is inserted into the right place\n+of the ``Mul`` that is being put into canonical form. It works something like\n+the following:\n+\n+``Mul(a, b, c, d, e, f) -> Mul(*(_transform_state_pair(a, b) + (c, d, e, f))))``\n+\n+The transforms here are always applied when quantum objects are multiplied.\n+\n+THIS IS EXPERIMENTAL.\n+\n+However, users of ``sympy.physics.quantum`` can import this dispatcher and\n+register their own transforms to control the canonical form of products\n+of quantum expressions.\n+\"\"\"\n+\n+@_transform_state_pair.register(Expr, Expr)\n+def _transform_expr(a, b):\n+ \"\"\"Default transformer that does nothing for base types.\"\"\"\n+ return None\n+\n+\n+# The identity times anything is the anything.\n+_transform_state_pair.add(\n+ (IdentityOperator, Expr),\n+ lambda x, y: (y,),\n+ on_ambiguity=ambiguity_register_error_ignore_dup\n+)\n+_transform_state_pair.add(\n+ (Expr, IdentityOperator),\n+ lambda x, y: (x,),\n+ on_ambiguity=ambiguity_register_error_ignore_dup\n+)\n+_transform_state_pair.add(\n+ (IdentityOperator, IdentityOperator),\n+ lambda x, y: S.One,\n+ on_ambiguity=ambiguity_register_error_ignore_dup\n+)\n+\n+@_transform_state_pair.register(BraBase, KetBase)\n+def _transform_bra_ket(a, b):\n+ \"\"\"Transform a bra*ket -> InnerProduct(bra, ket).\"\"\"\n+ return (InnerProduct(a, b),)\n+\n+@_transform_state_pair.register(KetBase, BraBase)\n+def _transform_ket_bra(a, b):\n+ \"\"\"Transform a keT*bra -> OuterProduct(ket, bra).\"\"\"\n+ return (OuterProduct(a, b),)\n+\n+@_transform_state_pair.register(KetBase, KetBase)\n+def _transform_ket_ket(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply two kets.\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ raise TypeError(\n+ 'Multiplication of two kets is not allowed. 
Use TensorProduct instead.'\n+ )\n+\n+@_transform_state_pair.register(BraBase, BraBase)\n+def _transform_bra_bra(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply two bras.\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ raise TypeError(\n+ 'Multiplication of two bras is not allowed. Use TensorProduct instead.'\n+ )\n+\n+@_transform_state_pair.register(OuterProduct, KetBase)\n+def _transform_op_ket(a, b):\n+ return (InnerProduct(a.bra, b), a.ket)\n+\n+@_transform_state_pair.register(BraBase, OuterProduct)\n+def _transform_bra_op(a, b):\n+ return (InnerProduct(a, b.ket), b.bra)\n+\n+@_transform_state_pair.register(TensorProduct, KetBase)\n+def _transform_tp_ket(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply TensorProduct(*kets)*ket.\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ if a.kind == KetKind:\n+ raise TypeError(\n+ 'Multiplication of TensorProduct(*kets)*ket is invalid.'\n+ )\n+\n+@_transform_state_pair.register(KetBase, TensorProduct)\n+def _transform_ket_tp(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply ket*TensorProduct(*kets).\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ if b.kind == KetKind:\n+ raise TypeError(\n+ 'Multiplication of ket*TensorProduct(*kets) is invalid.'\n+ )\n+\n+@_transform_state_pair.register(TensorProduct, BraBase)\n+def _transform_tp_bra(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply TensorProduct(*bras)*bra.\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ if a.kind == BraKind:\n+ raise TypeError(\n+ 'Multiplication of TensorProduct(*bras)*bra is invalid.'\n+ )\n+\n+@_transform_state_pair.register(BraBase, TensorProduct)\n+def _transform_bra_tp(a, b):\n+ \"\"\"Raise a TypeError if a user tries to multiply bra*TensorProduct(*bras).\n+\n+ Multiplication based on `*` is not a shorthand for tensor products.\n+ \"\"\"\n+ if b.kind == BraKind:\n+ raise TypeError(\n+ 'Multiplication of bra*TensorProduct(*bras) is invalid.'\n+ )\n+\n+@_transform_state_pair.register(TensorProduct, TensorProduct)\n+def _transform_tp_tp(a, b):\n+ \"\"\"Combine a product of tensor products if their number of args matches.\"\"\"\n+ debug('_transform_tp_tp', a, b)\n+ if len(a.args) == len(b.args):\n+ if a.kind == BraKind and b.kind == KetKind:\n+ return tuple([InnerProduct(i, j) for (i, j) in zip(a.args, b.args)])\n+ else:\n+ return (TensorProduct(*(i*j for (i, j) in zip(a.args, b.args))), )\n+\n+@_transform_state_pair.register(OuterProduct, OuterProduct)\n+def _transform_op_op(a, b):\n+ \"\"\"Extract an inner produt from a product of outer products.\"\"\"\n+ return (InnerProduct(a.bra, b.ket), OuterProduct(a.ket, b.bra))\n+\n+\n+#-----------------------------------------------------------------------------\n+# Postprocessing transforms for Mul and Pow\n+#-----------------------------------------------------------------------------\n+\n+\n+def _postprocess_state_mul(expr):\n+ \"\"\"Trasform a ``Mul`` of quantum expressions into canonical form.\n+\n+ This function is registered ``_constructor_postprocessor_mapping`` as a\n+ transformer for ``Mul``. This means that every time a quantum expression\n+ is multiplied, this function will be called to transform it into canonical\n+ form as defined by the binary functions registered with\n+ ``_transform_state_pair``.\n+\n+ The algorithm of this function is as follows. 
It walks the args\n+ of the input ``Mul`` from left to right and calls ``_transform_state_pair``\n+ on every overlapping pair of args. Each time ``_transform_state_pair``\n+ is called it can return a tuple of items or None. If None, the pair isn't\n+ transformed. If a tuple, then the last element of the tuple goes back into\n+ the args to be transformed again and the others are extended onto the result\n+ args list.\n+\n+ The algorithm can be visualized in the following table:\n+\n+ step result args\n+ ============================================================================\n+ #0 [] [a, b, c, d, e, f]\n+ #1 [] [T(a,b), c, d, e, f]\n+ #2 [T(a,b)[:-1]] [T(a,b)[-1], c, d, e, f]\n+ #3 [T(a,b)[:-1]] [T(T(a,b)[-1], c), d, e, f]\n+ #4 [T(a,b)[:-1], T(T(a,b)[-1], c)[:-1]] [T(T(T(a,b)[-1], c)[-1], d), e, f]\n+ #5 ...\n+\n+ One limitation of the current implementation is that we assume that only the\n+ last item of the transformed tuple goes back into the args to be transformed\n+ again. These seems to handle the cases needed for Mul. However, we may need\n+ to extend the algorithm to have the entire tuple go back into the args for\n+ further transformation.\n+ \"\"\"\n+ args = list(expr.args)\n+ result = []\n+\n+ # Continue as long as we have at least 2 elements\n+ while len(args) > 1:\n+ # Get first two elements\n+ first = args.pop(0)\n+ second = args[0] # Look at second element without popping yet\n+\n+ transformed = _transform_state_pair(first, second)\n+\n+ if transformed is None:\n+ # If transform returns None, append first element\n+ result.append(first)\n+ else:\n+ # This item was transformed, pop and discard\n+ args.pop(0)\n+ # The last item goes back to be transformed again\n+ args.insert(0, transformed[-1])\n+ # All other items go directly into the result\n+ result.extend(transformed[:-1])\n+\n+ # Append any remaining element\n+ if args:\n+ result.append(args[0])\n+\n+ return Mul._from_args(result, is_commutative=False)\n+\n+\n+def _postprocess_state_pow(expr):\n+ \"\"\"Handle bras and kets raised to powers.\n+\n+ Under ``*`` multiplication this is invalid. Users should use a\n+ TensorProduct instead.\n+ \"\"\"\n+ base, exp = expr.as_base_exp()\n+ if base.kind == KetKind or base.kind == BraKind:\n+ raise TypeError(\n+ 'A bra or ket to a power is invalid, use TensorProduct instead.'\n+ )\n+\n+\n+def _postprocess_tp_pow(expr):\n+ \"\"\"Handle TensorProduct(*operators)**(positive integer).\n+\n+ This handles a tensor product of operators, to an integer power.\n+ The power here is interpreted as regular multiplication, not\n+ tensor product exponentiation. 
The form of exponentiation performed\n+ here leaves the space and dimension of the object the same.\n+\n+ This operation does not make sense for tensor product's of states.\n+ \"\"\"\n+ base, exp = expr.as_base_exp()\n+ debug('_postprocess_tp_pow: ', base, exp, expr.args)\n+ if isinstance(base, TensorProduct) and exp.is_integer and exp.is_positive and base.kind == OperatorKind:\n+ new_args = [a**exp for a in base.args]\n+ return TensorProduct(*new_args)\n+\n+\n+#-----------------------------------------------------------------------------\n+# Register the transformers with Basic._constructor_postprocessor_mapping\n+#-----------------------------------------------------------------------------\n+\n+\n+Basic._constructor_postprocessor_mapping[StateBase] = {\n+ \"Mul\": [_postprocess_state_mul],\n+ \"Pow\": [_postprocess_state_pow]\n+}\n+\n+Basic._constructor_postprocessor_mapping[TensorProduct] = {\n+ \"Mul\": [_postprocess_state_mul],\n+ \"Pow\": [_postprocess_tp_pow]\n+}\n+\n+Basic._constructor_postprocessor_mapping[Operator] = {\n+ \"Mul\": [_postprocess_state_mul]\n+}\n" }
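The reviewed patch above replaces the custom `__mul__`/`__pow__` hooks on the quantum classes with constructor postprocessors registered for `Mul` and `Pow`. A minimal sketch of the resulting behaviour, assuming a SymPy checkout with this patch applied (the cases mirror the patch's own `test_transforms.py` and `test_tensorproduct.py`):

```python
# Illustrative sketch only; assumes a SymPy build that includes this patch.
from sympy import symbols
from sympy.physics.quantum import (
    Bra, Ket, InnerProduct, Operator, OuterProduct, TensorProduct
)

b, k = Bra('b'), Ket('k')
A, B, C, D = Operator('A'), Operator('B'), Operator('C'), Operator('D')
n = symbols('n', integer=True, positive=True)

assert b * k == InnerProduct(b, k)   # bra*ket -> inner product, automatically
assert k * b == OuterProduct(k, b)   # ket*bra -> outer product, automatically

# Products and integer powers of operator tensor products combine without
# calling tensor_product_simp explicitly.
assert TensorProduct(A, B) * TensorProduct(C, D) == TensorProduct(A*C, B*D)
assert TensorProduct(A, B)**n == TensorProduct(A**n, B**n)

try:
    k * k   # plain * is not a tensor-product shorthand under the new rules
except TypeError:
    pass    # raised by the new Mul/Pow postprocessors
```

Invalid combinations such as `ket*ket` now raise `TypeError` rather than silently building a `Mul`, which is exactly what the `test_transforms.py` hunk quoted in the review comments below exercises.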
[ { "diff_hunk": "@@ -68,18 +80,22 @@ def qapply(e, **options):\n |k><b|\n >>> qapply(A * b.dual / (b * b.dual))\n |k>\n- >>> qapply(k.dual * A / (k.dual * k), dagger=True)\n- <b|\n >>> qapply(k.dual * A / (k.dual * k))\n- <k|*|k><b|/<k|k>\n+ <b|\n \"\"\"\n from sympy.physics.quantum.density import Density\n \n dagger = options.get('dagger', False)\n sum_doit = options.get('sum_doit', False)\n+ ip_doit = options.get('ip_doit', True)\n \n- if e == 0:\n- return S.Zero\n+ if isinstance(e, (int, complex, float)):\n+ e = sympify(e)", "line": null, "original_line": 93, "original_start_line": null, "path": "sympy/physics/quantum/qapply.py", "start_line": null, "text": "@user1:\nWhy not just call `sympify` unconditionally? Actually `_sympify` is better because it doesn't parse strings.\n\n@author:\nWill fix this now." }, { "diff_hunk": "@@ -0,0 +1,75 @@\n+\"\"\"Tests of transforms of quantum expressions for Mul and Pow.\"\"\"\n+\n+from sympy.core.symbol import symbols\n+from sympy.testing.pytest import raises\n+\n+from sympy.physics.quantum.operator import (\n+ Operator, OuterProduct\n+)\n+from sympy.physics.quantum.state import Ket, Bra\n+from sympy.physics.quantum.innerproduct import InnerProduct\n+from sympy.physics.quantum.tensorproduct import TensorProduct\n+\n+\n+k1 = Ket('k1')\n+k2 = Ket('k2')\n+k3 = Ket('k3')\n+b1 = Bra('b1')\n+b2 = Bra('b2')\n+b3 = Bra('b3')\n+A = Operator('A')\n+B = Operator('B')\n+C = Operator('C')\n+x, y, z = symbols('x y z')\n+\n+\n+def test_bra_ket():\n+ assert b1*k1 == InnerProduct(b1, k1)\n+ assert k1*b1 == OuterProduct(k1, b1)\n+ # Test priority of inner product\n+ assert OuterProduct(k1, b1)*k2 == InnerProduct(b1, k2)*k1\n+ assert b1*OuterProduct(k1, b2) == InnerProduct(b1, k1)*b2\n+\n+\n+def test_tensor_product():\n+ # We are attempting to be rigourous and raise TypeError when a user tries\n+ # to combine bras, kets, and operators in a manner that doesn't make sense.\n+ # In particular, we are not trying to interpret regular ``*`` multiplication\n+ # as a tensor product.\n+ with raises(TypeError):\n+ assert k1*k1 == TensorProduct(k1, k1)", "line": null, "original_line": 40, "original_start_line": null, "path": "sympy/physics/quantum/tests/test_transforms.py", "start_line": null, "text": "@user1:\nWhich part of this is raising TypeError? Is it the `==` or the `k1*k2`. The with block should only have the minimal code that raises the error.\n\n@author:\nYeah, jusst the `k1*k2` part, will fix that now." } ]
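The first review comment above recommends `_sympify` over an unconditional `sympify` because it does not parse strings. A short sketch of that distinction, assuming a stock SymPy installation:

```python
# Sketch of the sympify vs. _sympify distinction raised in the review.
from sympy import Integer, Symbol, sympify
from sympy.core.sympify import SympifyError, _sympify

assert sympify(0) == Integer(0)              # plain Python numbers are converted
assert sympify("x + 1") == Symbol("x") + 1   # strings are parsed into expressions

assert _sympify(0) == Integer(0)             # strict conversion still handles numbers
try:
    _sympify("x + 1")                        # but strings are rejected in strict mode
except SympifyError:
    pass
```

In the merged patch below, `qapply` accordingly calls `_sympify(e)` on its input instead of special-casing `int`, `float`, and `complex`.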
c11078925db5e76886cfbf78a948e44152eb9724
diff --git a/doc/src/explanation/active-deprecations.md b/doc/src/explanation/active-deprecations.md index d8b1c469fbdf..22fb46480686 100644 --- a/doc/src/explanation/active-deprecations.md +++ b/doc/src/explanation/active-deprecations.md @@ -76,10 +76,28 @@ SymPy deprecation warnings. ## Version 1.14 +(deprecated-tensorproduct-simp)= +### Deprecated tensor_product_simp from physics.quantum + +The ``tensor_product_simp`` function in the ``sympy.physics.quantum`` +module has been deprecated along with two helper functions, +``tensor_product_simp_Mul`` and ``tensor_product_simp_Pow``. The +transformations performed by these functions are now applied +automatically to all quantum expressions in the new +``sympy.physics.quantum.transforms`` module. + +If you are using these functions in your code, you can remove them as +they are now reduntant. + +Their current implementations have been replaced by a simple +pass-through as all quantum expressions will already be in the form +originally produced by these functions. These pass throughs will +remain, along with its tests for at least one year after the 1.14 release. + (deprecated-operator-identity)= ### Deprecated IdentityOperator from physics.quantum -The ``IdentityOperator`` in the ``sympy.physics.quantum`` moddule has been +The ``IdentityOperator`` in the ``sympy.physics.quantum`` module has been deprecated. Originally, we thought that it would be helpful to have a multiplicative identity for quantum operators and states. However, at this time, it is unused in `sympy.physics.quantum` for anything other than tests diff --git a/sympy/physics/quantum/__init__.py b/sympy/physics/quantum/__init__.py index bf08e1f7a383..36203f1a48c4 100644 --- a/sympy/physics/quantum/__init__.py +++ b/sympy/physics/quantum/__init__.py @@ -29,7 +29,9 @@ 'hbar', 'HBar', + '_postprocess_state_mul', '_postprocess_state_pow' ] + from .anticommutator import AntiCommutator from .qapply import qapply @@ -57,3 +59,7 @@ from .tensorproduct import TensorProduct, tensor_product_simp from .constants import hbar, HBar + +# These are private, but need to be imported so they are registered +# as postprocessing transformers with Mul and Pow. +from .transforms import _postprocess_state_mul, _postprocess_state_pow diff --git a/sympy/physics/quantum/anticommutator.py b/sympy/physics/quantum/anticommutator.py index a73f1c207793..cbd26eade640 100644 --- a/sympy/physics/quantum/anticommutator.py +++ b/sympy/physics/quantum/anticommutator.py @@ -1,13 +1,14 @@ """The anti-commutator: ``{A,B} = A*B + B*A``.""" from sympy.core.expr import Expr +from sympy.core.kind import KindDispatcher from sympy.core.mul import Mul from sympy.core.numbers import Integer from sympy.core.singleton import S from sympy.printing.pretty.stringpict import prettyForm -from sympy.physics.quantum.operator import Operator from sympy.physics.quantum.dagger import Dagger +from sympy.physics.quantum.kind import _OperatorKind, OperatorKind __all__ = [ 'AntiCommutator' @@ -80,6 +81,13 @@ class AntiCommutator(Expr): """ is_commutative = False + _kind_dispatcher = KindDispatcher("AntiCommutator_kind_dispatcher", commutative=True) + + @property + def kind(self): + arg_kinds = (a.kind for a in self.args) + return self._kind_dispatcher(*arg_kinds) + def __new__(cls, A, B): r = cls.eval(A, B) if r is not None: @@ -110,6 +118,9 @@ def eval(cls, a, b): def doit(self, **hints): """ Evaluate anticommutator """ + # Keep the import of Operator here to avoid problems with + # circular imports. 
+ from sympy.physics.quantum.operator import Operator A = self.args[0] B = self.args[1] if isinstance(A, Operator) and isinstance(B, Operator): @@ -147,3 +158,9 @@ def _pretty(self, printer, *args): def _latex(self, printer, *args): return "\\left\\{%s,%s\\right\\}" % tuple([ printer._print(arg, *args) for arg in self.args]) + + +@AntiCommutator._kind_dispatcher.register(_OperatorKind, _OperatorKind) +def find_op_kind(e1, e2): + """Find the kind of an anticommutator of two OperatorKinds.""" + return OperatorKind diff --git a/sympy/physics/quantum/boson.py b/sympy/physics/quantum/boson.py index 4dfd2286b120..0f24cae2a7ad 100644 --- a/sympy/physics/quantum/boson.py +++ b/sympy/physics/quantum/boson.py @@ -1,6 +1,5 @@ """Bosonic quantum operators.""" -from sympy.core.mul import Mul from sympy.core.numbers import Integer from sympy.core.singleton import S from sympy.functions.elementary.complexes import conjugate @@ -92,18 +91,6 @@ def _eval_anticommutator_BosonOp(self, other, **hints): def _eval_adjoint(self): return BosonOp(str(self.name), not self.is_annihilation) - def __mul__(self, other): - - if isinstance(other, Mul): - args1 = tuple(arg for arg in other.args if arg.is_commutative) - args2 = tuple(arg for arg in other.args if not arg.is_commutative) - x = self - for y in args2: - x = x * y - return Mul(*args1) * x - - return Mul(self, other) - def _print_contents_latex(self, printer, *args): if self.is_annihilation: return r'{%s}' % str(self.name) diff --git a/sympy/physics/quantum/commutator.py b/sympy/physics/quantum/commutator.py index 627158657481..a2d97a679e27 100644 --- a/sympy/physics/quantum/commutator.py +++ b/sympy/physics/quantum/commutator.py @@ -2,13 +2,14 @@ from sympy.core.add import Add from sympy.core.expr import Expr +from sympy.core.kind import KindDispatcher from sympy.core.mul import Mul from sympy.core.power import Pow from sympy.core.singleton import S from sympy.printing.pretty.stringpict import prettyForm from sympy.physics.quantum.dagger import Dagger -from sympy.physics.quantum.operator import Operator +from sympy.physics.quantum.kind import _OperatorKind, OperatorKind __all__ = [ @@ -94,6 +95,13 @@ class returns the commutator in an unevaluated form. To evaluate the """ is_commutative = False + _kind_dispatcher = KindDispatcher("Commutator_kind_dispatcher", commutative=True) + + @property + def kind(self): + arg_kinds = (a.kind for a in self.args) + return self._kind_dispatcher(*arg_kinds) + def __new__(cls, A, B): r = cls.eval(A, B) if r is not None: @@ -200,6 +208,9 @@ def _eval_expand_commutator(self, **hints): def doit(self, **hints): """ Evaluate commutator """ + # Keep the import of Operator here to avoid problems with + # circular imports. 
+ from sympy.physics.quantum.operator import Operator A = self.args[0] B = self.args[1] if isinstance(A, Operator) and isinstance(B, Operator): @@ -237,3 +248,9 @@ def _pretty(self, printer, *args): def _latex(self, printer, *args): return "\\left[%s,%s\\right]" % tuple([ printer._print(arg, *args) for arg in self.args]) + + +@Commutator._kind_dispatcher.register(_OperatorKind, _OperatorKind) +def find_op_kind(e1, e2): + """Find the kind of an anticommutator of two OperatorKinds.""" + return OperatorKind diff --git a/sympy/physics/quantum/dagger.py b/sympy/physics/quantum/dagger.py index 6305a656c366..f96f01e3b9ac 100644 --- a/sympy/physics/quantum/dagger.py +++ b/sympy/physics/quantum/dagger.py @@ -1,6 +1,6 @@ """Hermitian conjugation.""" -from sympy.core import Expr, Mul, sympify +from sympy.core import Expr, sympify from sympy.functions.elementary.complexes import adjoint __all__ = [ @@ -79,6 +79,11 @@ class Dagger(adjoint): .. [2] https://en.wikipedia.org/wiki/Hermitian_transpose """ + @property + def kind(self): + """Find the kind of a dagger of something (just the kind of the something).""" + return self.args[0].kind + def __new__(cls, arg, evaluate=True): if hasattr(arg, 'adjoint') and evaluate: return arg.adjoint() @@ -86,12 +91,5 @@ def __new__(cls, arg, evaluate=True): return arg.conjugate().transpose() return Expr.__new__(cls, sympify(arg)) - def __mul__(self, other): - from sympy.physics.quantum import IdentityOperator - if isinstance(other, IdentityOperator): - return self - - return Mul(self, other) - adjoint.__name__ = "Dagger" adjoint._sympyrepr = lambda a, b: "Dagger(%s)" % b._print(a.args[0]) diff --git a/sympy/physics/quantum/density.py b/sympy/physics/quantum/density.py index aa1f408d93fd..941373e8105d 100644 --- a/sympy/physics/quantum/density.py +++ b/sympy/physics/quantum/density.py @@ -12,7 +12,6 @@ from sympy.physics.quantum.operator import HermitianOperator from sympy.physics.quantum.represent import represent from sympy.physics.quantum.matrixutils import numpy_ndarray, scipy_sparse_matrix, to_numpy -from sympy.physics.quantum.tensorproduct import TensorProduct, tensor_product_simp from sympy.physics.quantum.trace import Tr @@ -184,13 +183,10 @@ def _generate_outer_prod(self, arg1, arg2): ' Non-commutative instance required' ' for outer product.') - # Muls of Tensor Products should be expanded - # before this function is called - if (isinstance(nc_part1[0], TensorProduct) and len(nc_part1) == 1 - and len(nc_part2) == 1): - op = tensor_product_simp(nc_part1[0]*Dagger(nc_part2[0])) - else: - op = Mul(*nc_part1)*Dagger(Mul(*nc_part2)) + # We were able to remove some tensor product simplifications that + # used to be here as those transformations are not automatically + # applied by transforms.py. 
+ op = Mul(*nc_part1)*Dagger(Mul(*nc_part2)) return Mul(*c_part1)*Mul(*c_part2) * op diff --git a/sympy/physics/quantum/innerproduct.py b/sympy/physics/quantum/innerproduct.py index 1b712f2db9a8..11fed882b606 100644 --- a/sympy/physics/quantum/innerproduct.py +++ b/sympy/physics/quantum/innerproduct.py @@ -1,10 +1,11 @@ """Symbolic inner product.""" from sympy.core.expr import Expr +from sympy.core.kind import NumberKind from sympy.functions.elementary.complexes import conjugate from sympy.printing.pretty.stringpict import prettyForm from sympy.physics.quantum.dagger import Dagger -from sympy.physics.quantum.state import KetBase, BraBase + __all__ = [ 'InnerProduct' @@ -45,23 +46,17 @@ class InnerProduct(Expr): >>> ip.ket |k> - In simple products of kets and bras inner products will be automatically + In quantum expressions, inner products will be automatically identified and created:: >>> b*k <b|k> - But in more complex expressions, there is ambiguity in whether inner or - outer products should be created:: + In more complex expressions, where there is ambiguity in whether inner or + outer products should be created, inner products have high priority:: >>> k*b*k*b - |k><b|*|k>*<b| - - A user can force the creation of a inner products in a complex expression - by using parentheses to group the bra and ket:: - - >>> k*(b*k)*b - <b|k>*|k>*<b| + <b|k>*|k><b| Notice how the inner product <b|k> moved to the left of the expression because inner products are commutative complex numbers. @@ -71,9 +66,15 @@ class InnerProduct(Expr): .. [1] https://en.wikipedia.org/wiki/Inner_product """ + + kind = NumberKind + is_complex = True def __new__(cls, bra, ket): + # Keep the import of BraBase and KetBase here to avoid problems + # with circular imports. + from sympy.physics.quantum.state import KetBase, BraBase if not isinstance(ket, KetBase): raise TypeError('KetBase subclass expected, got: %r' % ket) if not isinstance(bra, BraBase): diff --git a/sympy/physics/quantum/kind.py b/sympy/physics/quantum/kind.py new file mode 100644 index 000000000000..14b5bd2c7b0c --- /dev/null +++ b/sympy/physics/quantum/kind.py @@ -0,0 +1,103 @@ +"""Kinds for Operators, Bras, and Kets. + +This module defines kinds for operators, bras, and kets. These are useful +in various places in ``sympy.physics.quantum`` as you often want to know +what the kind is of a compound expression. For example, if you multiply +an operator, bra, or ket by a number, you get back another operator, bra, +or ket - even though if you did an ``isinstance`` check you would find that +you have a ``Mul`` instead. The kind system is meant to give you a quick +way of determining how a compound expression behaves in terms of lower +level kinds. + +The resolution calculation of kinds for compound expressions can be found +either in container classes or in functions that are registered with +kind dispatchers. +""" + +from sympy.core.mul import Mul +from sympy.core.kind import Kind, _NumberKind + + +__all__ = [ + '_KetKind', + 'KetKind', + '_BraKind', + 'BraKind', + '_OperatorKind', + 'OperatorKind', +] + + +class _KetKind(Kind): + """A kind for quantum kets.""" + + def __new__(cls): + obj = super().__new__(cls) + return obj + + def __repr__(self): + return "KetKind" + +# Create an instance as many situations need this. +KetKind = _KetKind() + + +class _BraKind(Kind): + """A kind for quantum bras.""" + + def __new__(cls): + obj = super().__new__(cls) + return obj + + def __repr__(self): + return "BraKind" + +# Create an instance as many situations need this. 
+BraKind = _BraKind() + + +from sympy.core.kind import Kind + +class _OperatorKind(Kind): + """A kind for quantum operators.""" + + def __new__(cls): + obj = super().__new__(cls) + return obj + + def __repr__(self): + return "OperatorKind" + +# Create an instance as many situations need this. +OperatorKind = _OperatorKind() + + +#----------------------------------------------------------------------------- +# Kind resolution. +#----------------------------------------------------------------------------- + +# Note: We can't currently add kind dispatchers for the following combinations +# as the Mul._kind_dispatcher is set to commutative and will also +# register the opposite order, which isn't correct for these pairs: +# +# 1. (_OperatorKind, _KetKind) +# 2. (_BraKind, _OperatorKind) +# 3. (_BraKind, _KetKind) + + +@Mul._kind_dispatcher.register(_NumberKind, _KetKind) +def _mul_number_ket_kind(lhs, rhs): + """Perform the kind calculation of NumberKind*KetKind -> KetKind.""" + return KetKind + + +@Mul._kind_dispatcher.register(_NumberKind, _BraKind) +def _mul_number_bra_kind(lhs, rhs): + """Perform the kind calculation of NumberKind*BraKind -> BraKind.""" + return BraKind + + +@Mul._kind_dispatcher.register(_NumberKind, _OperatorKind) +def _mul_operator_kind(lhs, rhs): + """Perform the kind calculation of NumberKind*OperatorKind -> OperatorKind.""" + return OperatorKind diff --git a/sympy/physics/quantum/operator.py b/sympy/physics/quantum/operator.py index d5869a1607d0..f0533e7f6c9b 100644 --- a/sympy/physics/quantum/operator.py +++ b/sympy/physics/quantum/operator.py @@ -18,10 +18,13 @@ from sympy.core.singleton import S from sympy.printing.pretty.stringpict import prettyForm from sympy.physics.quantum.dagger import Dagger +from sympy.physics.quantum.kind import OperatorKind from sympy.physics.quantum.qexpr import QExpr, dispatch_method from sympy.matrices import eye from sympy.utilities.exceptions import sympy_deprecation_warning + + __all__ = [ 'Operator', 'HermitianOperator', @@ -108,6 +111,8 @@ class Operator(QExpr): def default_args(self): return ("O",) + kind = OperatorKind + #------------------------------------------------------------------------- # Printing #------------------------------------------------------------------------- @@ -185,13 +190,6 @@ def inverse(self): def _eval_inverse(self): return self**(-1) - def __mul__(self, other): - - if isinstance(other, IdentityOperator): - return self - - return Mul(self, other) - class HermitianOperator(Operator): """A Hermitian operator that satisfies H == Dagger(H). 
@@ -331,13 +329,6 @@ def _print_contents_pretty(self, printer, *args): def _print_contents_latex(self, printer, *args): return r'{\mathcal{I}}' - def __mul__(self, other): - - if isinstance(other, (Operator, Dagger)): - return other - - return Mul(self, other) - def _represent_default_basis(self, **options): if not self.N or self.N == oo: raise NotImplementedError('Cannot represent infinite dimensional' + @@ -372,7 +363,6 @@ class OuterProduct(Operator): Create a simple outer product by hand and take its dagger:: >>> from sympy.physics.quantum import Ket, Bra, OuterProduct, Dagger - >>> from sympy.physics.quantum import Operator >>> k = Ket('k') >>> b = Bra('b') @@ -388,24 +378,17 @@ class OuterProduct(Operator): >>> Dagger(op) |b><k| - In simple products of kets and bras outer products will be automatically + In quantum expressions, outer products will be automatically identified and created:: >>> k*b |k><b| - But in more complex expressions, outer products are not automatically - created:: - - >>> A = Operator('A') - >>> A*k*b - A*|k>*<b| - - A user can force the creation of an outer product in a complex expression - by using parentheses to group the ket and bra:: + However, the creation of inner products always has higher priority than that of + outer products: - >>> A*(k*b) - A*|k><b| + >>> b*k*b + <b|k>*<b| References ========== diff --git a/sympy/physics/quantum/qapply.py b/sympy/physics/quantum/qapply.py index 87379c7e3e96..a2d8c92e5155 100644 --- a/sympy/physics/quantum/qapply.py +++ b/sympy/physics/quantum/qapply.py @@ -6,10 +6,11 @@ from sympy.concrete import Sum from sympy.core.add import Add +from sympy.core.kind import NumberKind from sympy.core.mul import Mul from sympy.core.power import Pow from sympy.core.singleton import S -from sympy.core.sympify import sympify +from sympy.core.sympify import sympify, _sympify from sympy.physics.quantum.anticommutator import AntiCommutator from sympy.physics.quantum.commutator import Commutator @@ -28,6 +29,17 @@ # Main code #----------------------------------------------------------------------------- + +def ip_doit_func(e): + """Transform the inner products in an expression by calling ``.doit()``.""" + return e.replace(InnerProduct, lambda *args: InnerProduct(*args).doit()) + + +def sum_doit_func(e): + """Transform the sums in an expression by calling ``.doit()``.""" + return e.replace(Sum, lambda *args: Sum(*args).doit()) + + def qapply(e, **options): """Apply operators to states in a quantum expression. @@ -68,18 +80,21 @@ def qapply(e, **options): |k><b| >>> qapply(A * b.dual / (b * b.dual)) |k> - >>> qapply(k.dual * A / (k.dual * k), dagger=True) - <b| >>> qapply(k.dual * A / (k.dual * k)) - <k|*|k><b|/<k|k> + <b| """ from sympy.physics.quantum.density import Density dagger = options.get('dagger', False) sum_doit = options.get('sum_doit', False) + ip_doit = options.get('ip_doit', True) - if e == 0: - return S.Zero + e = _sympify(e) + + # Using the kind API here helps us to narrow what types of expressions + # we call ``ip_doit_func`` on. + if e.kind == NumberKind: + return ip_doit_func(e) if ip_doit else e # This may be a bit aggressive but ensures that everything gets expanded # to its simplest form before trying to apply operators. This includes @@ -114,8 +129,7 @@ def qapply(e, **options): # For a Sum, call qapply on its function. 
elif isinstance(e, Sum): result = Sum(qapply(e.function, **options), *e.limits) - if sum_doit: - result = result.doit() + result = sum_doit_func(result) if sum_doit else result return result # For a Pow, call qapply on its base. @@ -127,14 +141,17 @@ def qapply(e, **options): c_part, nc_part = e.args_cnc() c_mul = Mul(*c_part) nc_mul = Mul(*nc_part) - if isinstance(nc_mul, Mul): + if not nc_part: # If we only have a commuting part, just return it. + result = c_mul + elif isinstance(nc_mul, Mul): result = c_mul*qapply_Mul(nc_mul, **options) else: result = c_mul*qapply(nc_mul, **options) if result == e and dagger: - return Dagger(qapply_Mul(Dagger(e), **options)) - else: - return result + result = Dagger(qapply_Mul(Dagger(e), **options)) + result = ip_doit_func(result) if ip_doit else result + result = sum_doit_func(result) if sum_doit else result + return result # In all other cases (State, Operator, Pow, Commutator, InnerProduct, # OuterProduct) we won't ever have operators to apply to kets. @@ -144,10 +161,9 @@ def qapply(e, **options): def qapply_Mul(e, **options): - ip_doit = options.get('ip_doit', True) - sum_doit = options.get('sum_doit', False) - args = list(e.args) + extra = S.One + result = None # If we only have 0 or 1 args, we have nothing to do and return. if len(args) <= 1 or not isinstance(e, Mul): @@ -171,6 +187,10 @@ def qapply_Mul(e, **options): args.append(lhs.ket) lhs = lhs.bra + if isinstance(rhs, OuterProduct): + extra = rhs.bra # Append to the right of the result + rhs = rhs.ket + # Call .doit() on Commutator/AntiCommutator. if isinstance(lhs, (Commutator, AntiCommutator)): comm = lhs.doit() @@ -179,16 +199,16 @@ def qapply_Mul(e, **options): e.func(*(args + [comm.args[0], rhs])) + e.func(*(args + [comm.args[1], rhs])), **options - ) + )*extra else: - return qapply(e.func(*args)*comm*rhs, **options) + return qapply(e.func(*args)*comm*rhs, **options)*extra # Apply tensor products of operators to states if isinstance(lhs, TensorProduct) and all(isinstance(arg, (Operator, State, Mul, Pow)) or arg == 1 for arg in lhs.args) and \ isinstance(rhs, TensorProduct) and all(isinstance(arg, (Operator, State, Mul, Pow)) or arg == 1 for arg in rhs.args) and \ len(lhs.args) == len(rhs.args): result = TensorProduct(*[qapply(lhs.args[n]*rhs.args[n], **options) for n in range(len(lhs.args))]).expand(tensorproduct=True) - return qapply_Mul(e.func(*args), **options)*result + return qapply_Mul(e.func(*args), **options)*result*extra # For Sums, move the Sum to the right. if isinstance(rhs, Sum): @@ -197,19 +217,13 @@ def qapply_Mul(e, **options): raise ValueError('Duplicated dummy indices in separate sums in qapply.') limits = lhs.limits + rhs.limits result = Sum(qapply(lhs.function*rhs.function, **options), *limits) - if sum_doit: - result = result.doit() return qapply_Mul(e.func(*args)*result, **options) else: - result = Sum(qapply(lhs*rhs.function, **options), rhs.limits) - if sum_doit: - result = result.doit() + result = Sum(qapply(lhs*rhs.function, **options), *rhs.limits) return qapply_Mul(e.func(*args)*result, **options) if isinstance(lhs, Sum): - result = Sum(qapply(lhs.function*rhs, **options), lhs.limits) - if sum_doit: - result = result.doit() + result = Sum(qapply(lhs.function*rhs, **options), *lhs.limits) return qapply_Mul(e.func(*args)*result, **options) # Now try to actually apply the operator and build an inner product. 
@@ -233,19 +247,17 @@ def qapply_Mul(e, **options): if result is None: if isinstance(lhs, BraBase) and isinstance(rhs, KetBase): result = InnerProduct(lhs, rhs) - if ip_doit: - result = result.doit() # TODO: I may need to expand before returning the final result. - if result == 0: - return S.Zero + if isinstance(result, (int, complex, float)): + return _sympify(result) elif result is None: if len(args) == 0: # We had two args to begin with so args=[]. return e else: - return qapply_Mul(e.func(*(args + [lhs])), **options)*rhs + return qapply_Mul(e.func(*(args + [lhs])), **options)*rhs*extra elif isinstance(result, InnerProduct): - return result*qapply_Mul(e.func(*args), **options) + return result*qapply_Mul(e.func(*args), **options)*extra else: # result is a scalar times a Mul, Add or TensorProduct - return qapply(e.func(*args)*result, **options) + return qapply(e.func(*args)*result, **options)*extra diff --git a/sympy/physics/quantum/represent.py b/sympy/physics/quantum/represent.py index cfb0ea627571..3a1ada80aa6a 100644 --- a/sympy/physics/quantum/represent.py +++ b/sympy/physics/quantum/represent.py @@ -24,6 +24,7 @@ from sympy.physics.quantum.qapply import qapply from sympy.physics.quantum.operatorset import operators_to_state, state_to_operators + __all__ = [ 'represent', 'rep_innerproduct', @@ -133,9 +134,6 @@ def _represent_FooBasis(self, e, basis, **options) >>> y = XBra('y') >>> represent(X*x) x*DiracDelta(x - x_2) - >>> represent(X*x*y) - x*DiracDelta(x - x_3)*DiracDelta(x_1 - y) - """ format = options.get('format', 'sympy') @@ -199,15 +197,15 @@ def _represent_FooBasis(self, e, basis, **options) A = expr.args[0] B = expr.args[1] return represent(Mul(A, B) + Mul(B, A), **options) - elif isinstance(expr, InnerProduct): - return represent(Mul(expr.bra, expr.ket), **options) - elif not isinstance(expr, (Mul, OuterProduct)): + elif not isinstance(expr, (Mul, OuterProduct, InnerProduct)): + # We have removed special handling of inner products that used to be + # required (before automatic transforms). # For numpy and scipy.sparse, we can only handle numerical prefactors. if format in ('numpy', 'scipy.sparse'): return _sympy_to_scalar(expr) return expr - if not isinstance(expr, (Mul, OuterProduct)): + if not isinstance(expr, (Mul, OuterProduct, InnerProduct)): raise TypeError('Mul expected, got: %r' % expr) if "index" in options: @@ -302,7 +300,8 @@ def rep_innerproduct(expr, **options): result = prod.doit() format = options.get('format', 'sympy') - return expr._format_represent(result, format) + result = expr._format_represent(result, format) + return result def rep_expectation(expr, **options): @@ -345,7 +344,8 @@ def rep_expectation(expr, **options): bra = basis_kets[1].dual ket = basis_kets[0] - return qapply(bra*expr*ket) + result = qapply(bra*expr*ket) + return result def integrate_result(orig_expr, result, **options): diff --git a/sympy/physics/quantum/state.py b/sympy/physics/quantum/state.py index b2babef2e947..4ccd1ce9b987 100644 --- a/sympy/physics/quantum/state.py +++ b/sympy/physics/quantum/state.py @@ -11,6 +11,8 @@ from sympy.integrals.integrals import integrate from sympy.printing.pretty.stringpict import stringPict from sympy.physics.quantum.qexpr import QExpr, dispatch_method +from sympy.physics.quantum.kind import KetKind, BraKind + __all__ = [ 'KetBase', @@ -208,6 +210,8 @@ class KetBase(StateBase): use Ket. 
""" + kind = KetKind + lbracket = _straight_bracket rbracket = _rbracket lbracket_ucode = _straight_bracket_ucode @@ -223,22 +227,6 @@ def default_args(self): def dual_class(self): return BraBase - def __mul__(self, other): - """KetBase*other""" - from sympy.physics.quantum.operator import OuterProduct - if isinstance(other, BraBase): - return OuterProduct(self, other) - else: - return Expr.__mul__(self, other) - - def __rmul__(self, other): - """other*KetBase""" - from sympy.physics.quantum.innerproduct import InnerProduct - if isinstance(other, BraBase): - return InnerProduct(other, self) - else: - return Expr.__rmul__(self, other) - #------------------------------------------------------------------------- # _eval_* methods #------------------------------------------------------------------------- @@ -287,6 +275,8 @@ class BraBase(StateBase): instead use Bra. """ + kind = BraKind + lbracket = _lbracket rbracket = _straight_bracket lbracket_ucode = _lbracket_ucode @@ -314,22 +304,6 @@ def default_args(self): def dual_class(self): return KetBase - def __mul__(self, other): - """BraBase*other""" - from sympy.physics.quantum.innerproduct import InnerProduct - if isinstance(other, KetBase): - return InnerProduct(self, other) - else: - return Expr.__mul__(self, other) - - def __rmul__(self, other): - """other*BraBase""" - from sympy.physics.quantum.operator import OuterProduct - if isinstance(other, KetBase): - return OuterProduct(other, self) - else: - return Expr.__rmul__(self, other) - def _represent(self, **options): """A default represent that uses the Ket's version.""" from sympy.physics.quantum.dagger import Dagger @@ -626,7 +600,7 @@ def dual_class(self): return TimeDepKet -class OrthogonalState(State, StateBase): +class OrthogonalState(State): """General abstract quantum state used as a base class for Ket and Bra.""" pass diff --git a/sympy/physics/quantum/tensorproduct.py b/sympy/physics/quantum/tensorproduct.py index 334f2f66bf3e..058b3459227e 100644 --- a/sympy/physics/quantum/tensorproduct.py +++ b/sympy/physics/quantum/tensorproduct.py @@ -2,23 +2,27 @@ from sympy.core.add import Add from sympy.core.expr import Expr +from sympy.core.kind import KindDispatcher from sympy.core.mul import Mul from sympy.core.power import Pow from sympy.core.sympify import sympify from sympy.matrices.dense import DenseMatrix as Matrix from sympy.matrices.immutable import ImmutableDenseMatrix as ImmutableMatrix from sympy.printing.pretty.stringpict import prettyForm +from sympy.utilities.exceptions import sympy_deprecation_warning -from sympy.physics.quantum.qexpr import QuantumError from sympy.physics.quantum.dagger import Dagger -from sympy.physics.quantum.commutator import Commutator -from sympy.physics.quantum.anticommutator import AntiCommutator -from sympy.physics.quantum.state import Ket, Bra +from sympy.physics.quantum.kind import ( + KetKind, _KetKind, + BraKind, _BraKind, + OperatorKind, _OperatorKind +) from sympy.physics.quantum.matrixutils import ( numpy_ndarray, scipy_sparse_matrix, matrix_tensor_product ) +from sympy.physics.quantum.state import Ket, Bra from sympy.physics.quantum.trace import Tr @@ -120,6 +124,14 @@ class TensorProduct(Expr): """ is_commutative = False + _kind_dispatcher = KindDispatcher("TensorProduct_kind_dispatcher", commutative=True) + + @property + def kind(self): + """Calculate the kind of a tensor product by looking at its children.""" + arg_kinds = (a.kind for a in self.args) + return self._kind_dispatcher(*arg_kinds) + def __new__(cls, *args): if 
isinstance(args[0], (Matrix, ImmutableMatrix, numpy_ndarray, scipy_sparse_matrix)): @@ -263,7 +275,7 @@ def _eval_expand_tensorproduct(self, **hints): def _eval_trace(self, **kwargs): indices = kwargs.get('indices', None) - exp = tensor_product_simp(self) + exp = self if indices is None or len(indices) == 0: return Mul(*[Tr(arg).doit() for arg in exp.args]) @@ -273,153 +285,79 @@ def _eval_trace(self, **kwargs): def tensor_product_simp_Mul(e): - """Simplify a Mul with TensorProducts. - - Current the main use of this is to simplify a ``Mul`` of ``TensorProduct``s - to a ``TensorProduct`` of ``Muls``. It currently only works for relatively - simple cases where the initial ``Mul`` only has scalars and raw - ``TensorProduct``s, not ``Add``, ``Pow``, ``Commutator``s of - ``TensorProduct``s. - - Parameters - ========== - - e : Expr - A ``Mul`` of ``TensorProduct``s to be simplified. - - Returns - ======= - - e : Expr - A ``TensorProduct`` of ``Mul``s. - - Examples - ======== + """Simplify a Mul with tensor products. - This is an example of the type of simplification that this function - performs:: - - >>> from sympy.physics.quantum.tensorproduct import \ - tensor_product_simp_Mul, TensorProduct - >>> from sympy import Symbol - >>> A = Symbol('A',commutative=False) - >>> B = Symbol('B',commutative=False) - >>> C = Symbol('C',commutative=False) - >>> D = Symbol('D',commutative=False) - >>> e = TensorProduct(A,B)*TensorProduct(C,D) - >>> e - AxB*CxD - >>> tensor_product_simp_Mul(e) - (A*C)x(B*D) + .. deprecated:: 1.14. + The transformations applied by this function are not done automatically + when tensor products are combined. + Originally, the main use of this function is to simplify a ``Mul`` of + ``TensorProduct``s to a ``TensorProduct`` of ``Muls``. """ - # TODO: This won't work with Muls that have other composites of - # TensorProducts, like an Add, Commutator, etc. - # TODO: This only works for the equivalent of single Qbit gates. - if not isinstance(e, Mul): - return e - c_part, nc_part = e.args_cnc() - n_nc = len(nc_part) - if n_nc == 0: - return e - elif n_nc == 1: - if isinstance(nc_part[0], Pow): - return Mul(*c_part) * tensor_product_simp_Pow(nc_part[0]) - return e - elif e.has(TensorProduct): - current = nc_part[0] - if not isinstance(current, TensorProduct): - if isinstance(current, Pow): - if isinstance(current.base, TensorProduct): - current = tensor_product_simp_Pow(current) - else: - raise TypeError('TensorProduct expected, got: %r' % current) - n_terms = len(current.args) - new_args = list(current.args) - for next in nc_part[1:]: - # TODO: check the hilbert spaces of next and current here. - if isinstance(next, TensorProduct): - if n_terms != len(next.args): - raise QuantumError( - 'TensorProducts of different lengths: %r and %r' % - (current, next) - ) - for i in range(len(new_args)): - new_args[i] = new_args[i] * next.args[i] - else: - if isinstance(next, Pow): - if isinstance(next.base, TensorProduct): - new_tp = tensor_product_simp_Pow(next) - for i in range(len(new_args)): - new_args[i] = new_args[i] * new_tp.args[i] - else: - raise TypeError('TensorProduct expected, got: %r' % next) - else: - raise TypeError('TensorProduct expected, got: %r' % next) - current = next - return Mul(*c_part) * TensorProduct(*new_args) - elif e.has(Pow): - new_args = [ tensor_product_simp_Pow(nc) for nc in nc_part ] - return tensor_product_simp_Mul(Mul(*c_part) * TensorProduct(*new_args)) - else: - return e + sympy_deprecation_warning( + """ + tensor_product_simp_Mul has been deprecated. 
The transformations + performed by this function are now done automatically when + tensor products are multiplied. + """, + deprecated_since_version="1.14", + active_deprecations_target='deprecated-tensorproduct-simp' + ) + return e def tensor_product_simp_Pow(e): - """Evaluates ``Pow`` expressions whose base is ``TensorProduct``""" - if not isinstance(e, Pow): - return e + """Evaluates ``Pow`` expressions whose base is ``TensorProduct`` + + .. deprecated:: 1.14. + The transformations applied by this function are not done automatically + when tensor products are combined. + """ + sympy_deprecation_warning( + """ + tensor_product_simp_Pow has been deprecated. The transformations + performed by this function are now done automatically when + tensor products are exponentiated. + """, + deprecated_since_version="1.14", + active_deprecations_target='deprecated-tensorproduct-simp' + ) + return e - if isinstance(e.base, TensorProduct): - return TensorProduct(*[ b**e.exp for b in e.base.args]) - else: - return e def tensor_product_simp(e, **hints): - """Try to simplify and combine TensorProducts. + """Try to simplify and combine tensor products. - In general this will try to pull expressions inside of ``TensorProducts``. - It currently only works for relatively simple cases where the products have - only scalars, raw ``TensorProducts``, not ``Add``, ``Pow``, ``Commutators`` - of ``TensorProducts``. It is best to see what it does by showing examples. + .. deprecated:: 1.14. + The transformations applied by this function are not done automatically + when tensor products are combined. - Examples - ======== + Originally, this function tried to pull expressions inside of ``TensorProducts``. + It only worked for relatively simple cases where the products have + only scalars, raw ``TensorProducts``, not ``Add``, ``Pow``, ``Commutators`` + of ``TensorProducts``. + """ + sympy_deprecation_warning( + """ + tensor_product_simp has been deprecated. The transformations + performed by this function are now done automatically when + tensor products are combined. 
+ """, + deprecated_since_version="1.14", + active_deprecations_target='deprecated-tensorproduct-simp' + ) + return e - >>> from sympy.physics.quantum import tensor_product_simp - >>> from sympy.physics.quantum import TensorProduct - >>> from sympy import Symbol - >>> A = Symbol('A',commutative=False) - >>> B = Symbol('B',commutative=False) - >>> C = Symbol('C',commutative=False) - >>> D = Symbol('D',commutative=False) - First see what happens to products of tensor products: +@TensorProduct._kind_dispatcher.register(_OperatorKind, _OperatorKind) +def find_op_kind(e1, e2): + return OperatorKind - >>> e = TensorProduct(A,B)*TensorProduct(C,D) - >>> e - AxB*CxD - >>> tensor_product_simp(e) - (A*C)x(B*D) - This is the core logic of this function, and it works inside, powers, sums, - commutators and anticommutators as well: +@TensorProduct._kind_dispatcher.register(_KetKind, _KetKind) +def find_ket_kind(e1, e2): + return KetKind - >>> tensor_product_simp(e**2) - (A*C)x(B*D)**2 - """ - if isinstance(e, Add): - return Add(*[tensor_product_simp(arg) for arg in e.args]) - elif isinstance(e, Pow): - if isinstance(e.base, TensorProduct): - return tensor_product_simp_Pow(e) - else: - return tensor_product_simp(e.base) ** e.exp - elif isinstance(e, Mul): - return tensor_product_simp_Mul(e) - elif isinstance(e, Commutator): - return Commutator(*[tensor_product_simp(arg) for arg in e.args]) - elif isinstance(e, AntiCommutator): - return AntiCommutator(*[tensor_product_simp(arg) for arg in e.args]) - else: - return e +@TensorProduct._kind_dispatcher.register(_BraKind, _BraKind) +def find_bra_kind(e1, e2): + return BraKind diff --git a/sympy/physics/quantum/tests/test_cartesian.py b/sympy/physics/quantum/tests/test_cartesian.py index ddfd28d8b5f4..f1dd435fab68 100644 --- a/sympy/physics/quantum/tests/test_cartesian.py +++ b/sympy/physics/quantum/tests/test_cartesian.py @@ -7,6 +7,7 @@ from sympy.functions.elementary.miscellaneous import sqrt from sympy.functions.special.delta_functions import DiracDelta from sympy.sets.sets import Interval +from sympy.testing.pytest import XFAIL from sympy.physics.quantum import qapply, represent, L2, Dagger from sympy.physics.quantum import Commutator, hbar @@ -33,8 +34,6 @@ def test_x(): assert represent(XBra(x)) == DiracDelta(-x + x_1) assert XBra(x).position == x assert represent(XOp()*XKet()) == x*DiracDelta(x - x_2) - assert represent(XOp()*XKet()*XBra('y')) == \ - x*DiracDelta(x - x_3)*DiracDelta(x_1 - y) assert represent(XBra("y")*XKet()) == DiracDelta(x - y) assert represent( XKet()*XBra()) == DiracDelta(x - x_2) * DiracDelta(x_1 - x) @@ -49,6 +48,16 @@ def test_x(): hbar*I*DiracDelta(px - px_2)*DifferentialOperator(px) +@XFAIL +def _text_x_broken(): + # represent has some broken logic that is relying in particular + # forms of input, rather than a full and proper handling of + # all valid quantum expressions. Marking this test as XFAIL until + # we can refactor represent. 
+ assert represent(XOp()*XKet()*XBra('y')) == \ + x*DiracDelta(x - x_3)*DiracDelta(x_1 - y) + + def test_p(): assert Px.hilbert_space == L2(Interval(S.NegativeInfinity, S.Infinity)) assert qapply(Px*PxKet(px)) == px*PxKet(px) diff --git a/sympy/physics/quantum/tests/test_kind.py b/sympy/physics/quantum/tests/test_kind.py new file mode 100644 index 000000000000..e50467db4c2d --- /dev/null +++ b/sympy/physics/quantum/tests/test_kind.py @@ -0,0 +1,75 @@ +"""Tests for sympy.physics.quantum.kind.""" + +from sympy.core.kind import NumberKind, UndefinedKind +from sympy.core.symbol import symbols + +from sympy.physics.quantum.kind import ( + OperatorKind, KetKind, BraKind +) +from sympy.physics.quantum.anticommutator import AntiCommutator +from sympy.physics.quantum.commutator import Commutator +from sympy.physics.quantum.dagger import Dagger +from sympy.physics.quantum.operator import Operator +from sympy.physics.quantum.state import Ket, Bra +from sympy.physics.quantum.tensorproduct import TensorProduct + +k = Ket('k') +b = Bra('k') +A = Operator('A') +B = Operator('B') +x, y, z = symbols('x y z', integer=True) + +def test_bra_ket(): + assert k.kind == KetKind + assert b.kind == BraKind + assert (b*k).kind == NumberKind # inner product + assert (x*k).kind == KetKind + assert (x*b).kind == BraKind + + +def test_operator_kind(): + assert A.kind == OperatorKind + assert (A*B).kind == OperatorKind + assert (x*A).kind == OperatorKind + assert (x*A*B).kind == OperatorKind + assert (x*k*b).kind == OperatorKind # outer product + + +def test_undefind_kind(): + # Because of limitations in the kind dispatcher API, we are currently + # unable to have OperatorKind*KetKind -> KetKind (and similar for bras). + assert (A*k).kind == UndefinedKind + assert (b*A).kind == UndefinedKind + assert (x*b*A*k).kind == UndefinedKind + + +def test_dagger_kind(): + assert Dagger(k).kind == BraKind + assert Dagger(b).kind == KetKind + assert Dagger(A).kind == OperatorKind + + +def test_commutator_kind(): + assert Commutator(A, B).kind == OperatorKind + assert Commutator(A, x*B).kind == OperatorKind + assert Commutator(x*A, B).kind == OperatorKind + assert Commutator(x*A, x*B).kind == OperatorKind + + +def test_anticommutator_kind(): + assert AntiCommutator(A, B).kind == OperatorKind + assert AntiCommutator(A, x*B).kind == OperatorKind + assert AntiCommutator(x*A, B).kind == OperatorKind + assert AntiCommutator(x*A, x*B).kind == OperatorKind + + +def test_tensorproduct_kind(): + assert TensorProduct(k,k).kind == KetKind + assert TensorProduct(b,b).kind == BraKind + assert TensorProduct(x*k,y*k).kind == KetKind + assert TensorProduct(x*b,y*b).kind == BraKind + assert TensorProduct(x*b*k, y*b*k).kind == NumberKind + assert TensorProduct(x*k*b, y*k*b).kind == OperatorKind + assert TensorProduct(A, B).kind == OperatorKind + assert TensorProduct(A, x*B).kind == OperatorKind + assert TensorProduct(x*A, B).kind == OperatorKind diff --git a/sympy/physics/quantum/tests/test_operator.py b/sympy/physics/quantum/tests/test_operator.py index 8950fc9b931d..100cacd9a800 100644 --- a/sympy/physics/quantum/tests/test_operator.py +++ b/sympy/physics/quantum/tests/test_operator.py @@ -2,6 +2,7 @@ from sympy.core.mul import Mul from sympy.core.numbers import (Integer, pi) from sympy.core.symbol import (Symbol, symbols) +from sympy.core.sympify import sympify from sympy.functions.elementary.trigonometric import sin from sympy.physics.quantum.qexpr import QExpr from sympy.physics.quantum.dagger import Dagger @@ -95,6 +96,7 @@ def test_identity(): 
I = IdentityOperator() O = Operator('O') x = Symbol("x") + three = sympify(3) assert isinstance(I, IdentityOperator) assert isinstance(I, Operator) @@ -104,8 +106,8 @@ def test_identity(): assert I * Dagger(O) == Dagger(O) assert Dagger(O) * I == Dagger(O) assert isinstance(I * I, IdentityOperator) - assert isinstance(3 * I, Mul) - assert isinstance(I * x, Mul) + assert three * I == three + assert I * x == x assert I.inv() == I assert Dagger(I) == I assert qapply(I * O) == O diff --git a/sympy/physics/quantum/tests/test_qapply.py b/sympy/physics/quantum/tests/test_qapply.py index 839477822416..be6f68d9869d 100644 --- a/sympy/physics/quantum/tests/test_qapply.py +++ b/sympy/physics/quantum/tests/test_qapply.py @@ -99,7 +99,7 @@ def test_tensorproduct(): assert qapply(TensorProduct(a, Dagger(b) * b) * ket1) == 2 * ket3 assert qapply(bra1 * TensorProduct(a, b * b), dagger=True) == sqrt(2) * bra2 - assert qapply(bra2 * ket1).doit() == TensorProduct(1, 1) + assert qapply(bra2 * ket1).doit() == S.One assert qapply(TensorProduct(a, b * b) * ket1) == sqrt(2) * ket2 assert qapply(Dagger(TensorProduct(a, b * b) * ket1), dagger=True) == sqrt(2) * Dagger(ket2) @@ -143,9 +143,9 @@ def test_issue24158_ket_times_op(): assert qapply(P1) == QubitBra(0) * XGate(0) # qapply(P1) -> 0 before fix P1 = qapply(P1, dagger = True) # unsatisfactorily -> <0|*X(0), expect <1| since dagger=True assert qapply(P1, dagger = True) == QubitBra(1) # qapply(P1, dagger=True) -> 0 before fix - P2 = QubitBra(0) * QubitBra(0) * Qubit(0) * XGate(0) # 'forgot' to set brackets + P2 = QubitBra(0) * (QubitBra(0) * Qubit(0)) * XGate(0) # 'forgot' to set brackets P2 = qapply(P2, dagger = True) # unsatisfactorily -> <0|*X(0), expect <1| since dagger=True - assert qapply(P2, dagger = True) == QubitBra(1) # qapply(P1) -> 0 before fix + assert P2 == QubitBra(1) # qapply(P1) -> 0 before fix # Pull Request 24237: IdentityOperator from the right without dagger=True option with warns_deprecated_sympy(): assert qapply(QubitBra(1)*IdentityOperator()) == QubitBra(1) diff --git a/sympy/physics/quantum/tests/test_sho1d.py b/sympy/physics/quantum/tests/test_sho1d.py index 18d3862033ef..36ba792293a8 100644 --- a/sympy/physics/quantum/tests/test_sho1d.py +++ b/sympy/physics/quantum/tests/test_sho1d.py @@ -40,8 +40,8 @@ omega = Symbol('omega') m = Symbol('m') ndim = Integer(4) -p = Symbol('p', is_integer=True) -q = Symbol('q', nonnegative=True, is_integer=True) +p = Symbol('p', integer=True) +q = Symbol('q', nonnegative=True, integer=True) np = import_module('numpy') @@ -167,10 +167,10 @@ def test_sho_coherant_state(): assert simplify(qapply(SHOBra(q)*a*cstate, sum_doit=True)) == simplify(qapply(SHOBra(q)*alpha*cstate, sum_doit=True)) def test_issue_26495(): - nbar = Symbol('nbar', is_real=True, nonnegative=True) - n = Symbol('n', is_integer=True) - i = Symbol('i', is_integer=True, nonnegative=True) - j = Symbol('j', is_integer=True, nonnegative=True) - rho = (1/(1+nbar))*Sum((nbar/(1+nbar))**n*SHOKet(n)*SHOBra(n), (n,0,oo)) + nbar = Symbol('nbar', real=True, nonnegative=True) + n = Symbol('n', integer=True) + i = Symbol('i', integer=True, nonnegative=True) + j = Symbol('j', integer=True, nonnegative=True) + rho = Sum((nbar/(1+nbar))**n*SHOKet(n)*SHOBra(n), (n,0,oo)) result = qapply(SHOBra(i)*rho*SHOKet(j), sum_doit=True) - assert simplify(result) == nbar**j*(nbar+1)**(-j-1)*KroneckerDelta(i,j) + assert simplify(result) == (nbar/(nbar+1))**i*KroneckerDelta(i,j) diff --git a/sympy/physics/quantum/tests/test_spin.py 
b/sympy/physics/quantum/tests/test_spin.py index 2bc038e656b5..f905a7de5aed 100644 --- a/sympy/physics/quantum/tests/test_spin.py +++ b/sympy/physics/quantum/tests/test_spin.py @@ -8,6 +8,8 @@ from sympy.functions.elementary.trigonometric import (cos, sin) from sympy.matrices.dense import Matrix from sympy.abc import alpha, beta, gamma, j, m +from sympy.simplify import simplify + from sympy.physics.quantum import hbar, represent, Commutator, InnerProduct from sympy.physics.quantum.qapply import qapply from sympy.physics.quantum.tensorproduct import TensorProduct @@ -28,6 +30,16 @@ 'j12 j13 j24 j34 j123 j134 mi mi1 mp') +def assert_simplify_expand(e1, e2): + """Helper for simplifying and expanding results. + + This is needed to help us test complex expressions whose form + might change in subtle ways as the rest of sympy evolves. + """ + assert simplify(e1.expand(tensorproduct=True)) == \ + simplify(e2.expand(tensorproduct=True)) + + def test_represent_spin_operators(): assert represent(Jx) == hbar*Matrix([[0, 1], [1, 0]])/2 assert represent( @@ -3738,18 +3750,22 @@ def test_jplus(): hbar*sqrt(j**2 + j - m**2 - m)*JzKetCoupled(j, m + 1, (j1, j2)) # Uncoupled operators, uncoupled states # Numerical - assert qapply(TensorProduct(Jplus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \ - -hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \ + e1 = qapply(TensorProduct(Jplus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) + e2 = -hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \ hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - assert qapply(TensorProduct(1, Jplus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \ - -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, Jplus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) + e2 = -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) + \ hbar*sqrt(2)*TensorProduct(JxKet(1, 1), JxKet(1, 0))/2 - assert qapply(TensorProduct(Jplus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \ - hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(Jplus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) + e2 = hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 + \ hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) - assert qapply(TensorProduct(1, Jplus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \ - -hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, Jplus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) + e2 = -hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \ hbar*sqrt(2)*TensorProduct(JyKet(1, 1), JyKet(1, 0))/2 + assert_simplify_expand(e1, e2) assert qapply( TensorProduct(Jplus, 1)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == 0 assert qapply(TensorProduct(1, Jplus)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \ @@ -3826,18 +3842,22 @@ def test_jminus(): hbar*sqrt(j**2 + j - m**2 + m)*JzKetCoupled(j, m - 1, (j1, j2)) # Uncoupled operators, uncoupled states # Numerical - assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \ - hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \ + e1 = qapply(TensorProduct(Jminus, 1)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) + e2 = hbar*sqrt(2)*TensorProduct(JxKet(1, 0), JxKet(1, -1))/2 + \ hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - assert qapply(TensorProduct(1, Jminus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) == \ - -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - \ + assert_simplify_expand(e1, e2) 
+ e1 = qapply(TensorProduct(1, Jminus)*TensorProduct(JxKet(1, 1), JxKet(1, -1))) + e2 = -hbar*TensorProduct(JxKet(1, 1), JxKet(1, -1)) - \ hbar*sqrt(2)*TensorProduct(JxKet(1, 1), JxKet(1, 0))/2 - assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \ - hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 - \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(Jminus, 1)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) + e2 = hbar*sqrt(2)*TensorProduct(JyKet(1, 0), JyKet(1, -1))/2 - \ hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) - assert qapply(TensorProduct(1, Jminus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) == \ - hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, Jminus)*TensorProduct(JyKet(1, 1), JyKet(1, -1))) + e2 = hbar*I*TensorProduct(JyKet(1, 1), JyKet(1, -1)) + \ hbar*sqrt(2)*TensorProduct(JyKet(1, 1), JyKet(1, 0))/2 + assert_simplify_expand(e1, e2) assert qapply(TensorProduct(Jminus, 1)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \ sqrt(2)*hbar*TensorProduct(JzKet(1, 0), JzKet(1, -1)) assert qapply(TensorProduct( @@ -3915,24 +3935,30 @@ def test_j2(): assert qapply(TensorProduct(1, J2)*TensorProduct(JzKet(1, 1), JzKet(1, -1))) == \ 2*hbar**2*TensorProduct(JzKet(1, 1), JzKet(1, -1)) # Symbolic - assert qapply(TensorProduct(J2, 1)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) == \ - hbar**2*j1**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \ + e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) + e2 = hbar**2*j1**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \ hbar**2*j1*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) - assert qapply(TensorProduct(1, J2)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) == \ - hbar**2*j2**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, J2)*TensorProduct(JxKet(j1, m1), JxKet(j2, m2))) + e2 = hbar**2*j2**2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) + \ hbar**2*j2*TensorProduct(JxKet(j1, m1), JxKet(j2, m2)) - assert qapply(TensorProduct(J2, 1)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \ - hbar**2*j1**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) + e2 = hbar**2*j1**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \ hbar**2*j1*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) - assert qapply(TensorProduct(1, J2)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \ - hbar**2*j2**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, J2)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) + e2 = hbar**2*j2**2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) + \ hbar**2*j2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) - assert qapply(TensorProduct(J2, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - hbar**2*j1**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(J2, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = hbar**2*j1**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \ hbar**2*j1*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) - assert qapply(TensorProduct(1, J2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - hbar**2*j2**2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, J2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = hbar**2*j2**2*TensorProduct(JzKet(j1, m1), 
JzKet(j2, m2)) + \ hbar**2*j2*TensorProduct(JzKet(j1, m1), JzKet(j2, m2)) + assert_simplify_expand(e1, e2) def test_jx(): @@ -4016,14 +4042,16 @@ def test_jx(): TensorProduct(Sum(hbar*mi*WignerD(j1, mi, m1, 0, 0, pi/2) * Sum(WignerD(j1, mi1, mi, pi*Rational(3, 2), 0, 0)*JyKet(j1, mi1), (mi1, -j1, j1)), (mi, -j1, j1)), JyKet(j2, m2)) assert qapply(TensorProduct(1, Jx)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \ TensorProduct(JyKet(j1, m1), Sum(hbar*mi*WignerD(j2, mi, m2, 0, 0, pi/2) * Sum(WignerD(j2, mi1, mi, pi*Rational(3, 2), 0, 0)*JyKet(j2, mi1), (mi1, -j2, j2)), (mi, -j2, j2))) - assert qapply(TensorProduct(Jx, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - hbar*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \ + e1 = qapply(TensorProduct(Jx, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = hbar*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \ hbar*sqrt( j1**2 + j1 - m1**2 + m1)*TensorProduct(JzKet(j1, m1 - 1), JzKet(j2, m2))/2 - assert qapply(TensorProduct(1, Jx)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - hbar*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, Jx)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = hbar*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \ hbar*sqrt( j2**2 + j2 - m2**2 + m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 - 1))/2 + assert_simplify_expand(e1, e2) def test_jy(): @@ -4107,14 +4135,16 @@ def test_jy(): hbar*m1*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) assert qapply(TensorProduct(1, Jy)*TensorProduct(JyKet(j1, m1), JyKet(j2, m2))) == \ hbar*m2*TensorProduct(JyKet(j1, m1), JyKet(j2, m2)) - assert qapply(TensorProduct(Jy, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - -hbar*I*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \ + e1 = qapply(TensorProduct(Jy, 1)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = -hbar*I*sqrt(j1**2 + j1 - m1**2 - m1)*TensorProduct(JzKet(j1, m1 + 1), JzKet(j2, m2))/2 + \ hbar*I*sqrt( j1**2 + j1 - m1**2 + m1)*TensorProduct(JzKet(j1, m1 - 1), JzKet(j2, m2))/2 - assert qapply(TensorProduct(1, Jy)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) == \ - -hbar*I*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \ + assert_simplify_expand(e1, e2) + e1 = qapply(TensorProduct(1, Jy)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2))) + e2 = -hbar*I*sqrt(j2**2 + j2 - m2**2 - m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 + 1))/2 + \ hbar*I*sqrt( j2**2 + j2 - m2**2 + m2)*TensorProduct(JzKet(j1, m1), JzKet(j2, m2 - 1))/2 + assert_simplify_expand(e1, e2) def test_jz(): diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py index 5c4560932861..c17d533ae6d4 100644 --- a/sympy/physics/quantum/tests/test_tensorproduct.py +++ b/sympy/physics/quantum/tests/test_tensorproduct.py @@ -2,6 +2,7 @@ from sympy.core.symbol import symbols from sympy.core.expr import unchanged from sympy.matrices import Matrix, SparseMatrix, ImmutableMatrix +from sympy.testing.pytest import warns_deprecated_sympy from sympy.physics.quantum.commutator import Commutator as Comm from sympy.physics.quantum.tensorproduct import TensorProduct @@ -9,12 +10,16 @@ from sympy.physics.quantum.tensorproduct import tensor_product_simp from sympy.physics.quantum.dagger import Dagger from sympy.physics.quantum.qubit import Qubit, 
QubitBra -from sympy.physics.quantum.operator import OuterProduct +from sympy.physics.quantum.operator import OuterProduct, Operator from sympy.physics.quantum.density import Density from sympy.physics.quantum.trace import Tr -A, B, C, D = symbols('A,B,C,D', commutative=False) +A = Operator('A') +B = Operator('B') +C = Operator('C') +D = Operator('D') x = symbols('x') +y = symbols('y', integer=True, positive=True) mat1 = Matrix([[1, 2*I], [1 + I, 3]]) mat2 = Matrix([[2*I, 3], [4*I, 2]]) @@ -61,12 +66,14 @@ def test_tensor_product_commutator(): def test_tensor_product_simp(): - assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C) - # tests for Pow-expressions - assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x) - assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2,B**2) - assert tensor_product_simp(x*(TP(A, B)**2)*TP(C,D)) == x*TP(A**2*C,B**2*D) - assert tensor_product_simp(TP(A,B)-TP(C,D)**x) == TP(A,B)-TP(C**x,D**x) + with warns_deprecated_sympy(): + assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C) + # tests for Pow-expressions + assert TP(A, B)**y == TP(A**y, B**y) + assert tensor_product_simp(TP(A, B)**y) == TP(A**y, B**y) + assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2,B**2) + assert tensor_product_simp(x*(TP(A, B)**2)*TP(C,D)) == x*TP(A**2*C,B**2*D) + assert tensor_product_simp(TP(A,B)-TP(C,D)**y) == TP(A,B)-TP(C**y,D**y) def test_issue_5923(): @@ -82,8 +89,6 @@ def test_eval_trace(): #and density operators. Since, the test is more to test the behavior of #TensorProducts it remains here - A, B, C, D, E, F = symbols('A B C D E F', commutative=False) - # Density with simple tensor products as args t = TensorProduct(A, B) d = Density([t, 1.0]) diff --git a/sympy/physics/quantum/tests/test_transforms.py b/sympy/physics/quantum/tests/test_transforms.py new file mode 100644 index 000000000000..55349ebe3b80 --- /dev/null +++ b/sympy/physics/quantum/tests/test_transforms.py @@ -0,0 +1,75 @@ +"""Tests of transforms of quantum expressions for Mul and Pow.""" + +from sympy.core.symbol import symbols +from sympy.testing.pytest import raises + +from sympy.physics.quantum.operator import ( + Operator, OuterProduct +) +from sympy.physics.quantum.state import Ket, Bra +from sympy.physics.quantum.innerproduct import InnerProduct +from sympy.physics.quantum.tensorproduct import TensorProduct + + +k1 = Ket('k1') +k2 = Ket('k2') +k3 = Ket('k3') +b1 = Bra('b1') +b2 = Bra('b2') +b3 = Bra('b3') +A = Operator('A') +B = Operator('B') +C = Operator('C') +x, y, z = symbols('x y z') + + +def test_bra_ket(): + assert b1*k1 == InnerProduct(b1, k1) + assert k1*b1 == OuterProduct(k1, b1) + # Test priority of inner product + assert OuterProduct(k1, b1)*k2 == InnerProduct(b1, k2)*k1 + assert b1*OuterProduct(k1, b2) == InnerProduct(b1, k1)*b2 + + +def test_tensor_product(): + # We are attempting to be rigourous and raise TypeError when a user tries + # to combine bras, kets, and operators in a manner that doesn't make sense. + # In particular, we are not trying to interpret regular ``*`` multiplication + # as a tensor product. 
+ with raises(TypeError): + k1*k1 + with raises(TypeError): + b1*b1 + with raises(TypeError): + k1*TensorProduct(k2, k3) + with raises(TypeError): + b1*TensorProduct(b2, b3) + with raises(TypeError): + TensorProduct(k2, k3)*k1 + with raises(TypeError): + TensorProduct(b2, b3)*b1 + + assert TensorProduct(A, B, C)*TensorProduct(k1, k2, k3) == \ + TensorProduct(A*k1, B*k2, C*k3) + assert TensorProduct(b1, b2, b3)*TensorProduct(A, B, C) == \ + TensorProduct(b1*A, b2*B, b3*C) + assert TensorProduct(b1, b2, b3)*TensorProduct(k1, k2, k3) == \ + InnerProduct(b1, k1)*InnerProduct(b2, k2)*InnerProduct(b3, k3) + assert TensorProduct(b1, b2, b3)*TensorProduct(A, B, C)*TensorProduct(k1, k2, k3) == \ + TensorProduct(b1*A*k1, b2*B*k2, b3*C*k3) + + +def test_outer_product(): + assert OuterProduct(k1, b1)*OuterProduct(k2, b2) == \ + InnerProduct(b1, k2)*OuterProduct(k1, b2) + + +def test_compound(): + e1 = b1*A*B*k1*b2*k2*b3 + assert e1 == InnerProduct(b2, k2)*b1*A*B*OuterProduct(k1, b3) + + e2 = TensorProduct(k1, k2)*TensorProduct(b1, b2) + assert e2 == TensorProduct( + OuterProduct(k1, b1), + OuterProduct(k2, b2) + ) diff --git a/sympy/physics/quantum/transforms.py b/sympy/physics/quantum/transforms.py new file mode 100644 index 000000000000..d646d3e8e779 --- /dev/null +++ b/sympy/physics/quantum/transforms.py @@ -0,0 +1,291 @@ +"""Transforms that are always applied to quantum expressions. + +This module uses the kind and _constructor_postprocessor_mapping APIs +to transform different combinations of Operators, Bras, and Kets into +Inner/Outer/TensorProducts. These transformations are registered +with the postprocessing API of core classes like `Mul` and `Pow` and +are always applied to any expression involving Bras, Kets, and +Operators. This API replaces the custom `__mul__` and `__pow__` +methods of the quantum classes, which were found to be inconsistent. + +THIS IS EXPERIMENTAL. +""" +from sympy.core.basic import Basic +from sympy.core.expr import Expr +from sympy.core.mul import Mul +from sympy.core.singleton import S +from sympy.multipledispatch.dispatcher import ( + Dispatcher, ambiguity_register_error_ignore_dup +) +from sympy.utilities.misc import debug + +from sympy.physics.quantum.innerproduct import InnerProduct +from sympy.physics.quantum.kind import KetKind, BraKind, OperatorKind +from sympy.physics.quantum.operator import ( + OuterProduct, IdentityOperator, Operator +) +from sympy.physics.quantum.state import BraBase, KetBase, StateBase +from sympy.physics.quantum.tensorproduct import TensorProduct + + +#----------------------------------------------------------------------------- +# Multipledispatch based transformed for Mul and Pow +#----------------------------------------------------------------------------- + +_transform_state_pair = Dispatcher('_transform_state_pair') +"""Transform a pair of expression in a Mul to their canonical form. + +All functions that are registered with this dispatcher need to take +two inputs and return either tuple of transformed outputs, or None if no +transform is applied. The output tuple is inserted into the right place +of the ``Mul`` that is being put into canonical form. It works something like +the following: + +``Mul(a, b, c, d, e, f) -> Mul(*(_transform_state_pair(a, b) + (c, d, e, f))))`` + +The transforms here are always applied when quantum objects are multiplied. + +THIS IS EXPERIMENTAL. 
+ +However, users of ``sympy.physics.quantum`` can import this dispatcher and +register their own transforms to control the canonical form of products +of quantum expressions. +""" + +@_transform_state_pair.register(Expr, Expr) +def _transform_expr(a, b): + """Default transformer that does nothing for base types.""" + return None + + +# The identity times anything is the anything. +_transform_state_pair.add( + (IdentityOperator, Expr), + lambda x, y: (y,), + on_ambiguity=ambiguity_register_error_ignore_dup +) +_transform_state_pair.add( + (Expr, IdentityOperator), + lambda x, y: (x,), + on_ambiguity=ambiguity_register_error_ignore_dup +) +_transform_state_pair.add( + (IdentityOperator, IdentityOperator), + lambda x, y: S.One, + on_ambiguity=ambiguity_register_error_ignore_dup +) + +@_transform_state_pair.register(BraBase, KetBase) +def _transform_bra_ket(a, b): + """Transform a bra*ket -> InnerProduct(bra, ket).""" + return (InnerProduct(a, b),) + +@_transform_state_pair.register(KetBase, BraBase) +def _transform_ket_bra(a, b): + """Transform a keT*bra -> OuterProduct(ket, bra).""" + return (OuterProduct(a, b),) + +@_transform_state_pair.register(KetBase, KetBase) +def _transform_ket_ket(a, b): + """Raise a TypeError if a user tries to multiply two kets. + + Multiplication based on `*` is not a shorthand for tensor products. + """ + raise TypeError( + 'Multiplication of two kets is not allowed. Use TensorProduct instead.' + ) + +@_transform_state_pair.register(BraBase, BraBase) +def _transform_bra_bra(a, b): + """Raise a TypeError if a user tries to multiply two bras. + + Multiplication based on `*` is not a shorthand for tensor products. + """ + raise TypeError( + 'Multiplication of two bras is not allowed. Use TensorProduct instead.' + ) + +@_transform_state_pair.register(OuterProduct, KetBase) +def _transform_op_ket(a, b): + return (InnerProduct(a.bra, b), a.ket) + +@_transform_state_pair.register(BraBase, OuterProduct) +def _transform_bra_op(a, b): + return (InnerProduct(a, b.ket), b.bra) + +@_transform_state_pair.register(TensorProduct, KetBase) +def _transform_tp_ket(a, b): + """Raise a TypeError if a user tries to multiply TensorProduct(*kets)*ket. + + Multiplication based on `*` is not a shorthand for tensor products. + """ + if a.kind == KetKind: + raise TypeError( + 'Multiplication of TensorProduct(*kets)*ket is invalid.' + ) + +@_transform_state_pair.register(KetBase, TensorProduct) +def _transform_ket_tp(a, b): + """Raise a TypeError if a user tries to multiply ket*TensorProduct(*kets). + + Multiplication based on `*` is not a shorthand for tensor products. + """ + if b.kind == KetKind: + raise TypeError( + 'Multiplication of ket*TensorProduct(*kets) is invalid.' + ) + +@_transform_state_pair.register(TensorProduct, BraBase) +def _transform_tp_bra(a, b): + """Raise a TypeError if a user tries to multiply TensorProduct(*bras)*bra. + + Multiplication based on `*` is not a shorthand for tensor products. + """ + if a.kind == BraKind: + raise TypeError( + 'Multiplication of TensorProduct(*bras)*bra is invalid.' + ) + +@_transform_state_pair.register(BraBase, TensorProduct) +def _transform_bra_tp(a, b): + """Raise a TypeError if a user tries to multiply bra*TensorProduct(*bras). + + Multiplication based on `*` is not a shorthand for tensor products. + """ + if b.kind == BraKind: + raise TypeError( + 'Multiplication of bra*TensorProduct(*bras) is invalid.' 
+ ) + +@_transform_state_pair.register(TensorProduct, TensorProduct) +def _transform_tp_tp(a, b): + """Combine a product of tensor products if their number of args matches.""" + debug('_transform_tp_tp', a, b) + if len(a.args) == len(b.args): + if a.kind == BraKind and b.kind == KetKind: + return tuple([InnerProduct(i, j) for (i, j) in zip(a.args, b.args)]) + else: + return (TensorProduct(*(i*j for (i, j) in zip(a.args, b.args))), ) + +@_transform_state_pair.register(OuterProduct, OuterProduct) +def _transform_op_op(a, b): + """Extract an inner produt from a product of outer products.""" + return (InnerProduct(a.bra, b.ket), OuterProduct(a.ket, b.bra)) + + +#----------------------------------------------------------------------------- +# Postprocessing transforms for Mul and Pow +#----------------------------------------------------------------------------- + + +def _postprocess_state_mul(expr): + """Trasform a ``Mul`` of quantum expressions into canonical form. + + This function is registered ``_constructor_postprocessor_mapping`` as a + transformer for ``Mul``. This means that every time a quantum expression + is multiplied, this function will be called to transform it into canonical + form as defined by the binary functions registered with + ``_transform_state_pair``. + + The algorithm of this function is as follows. It walks the args + of the input ``Mul`` from left to right and calls ``_transform_state_pair`` + on every overlapping pair of args. Each time ``_transform_state_pair`` + is called it can return a tuple of items or None. If None, the pair isn't + transformed. If a tuple, then the last element of the tuple goes back into + the args to be transformed again and the others are extended onto the result + args list. + + The algorithm can be visualized in the following table: + + step result args + ============================================================================ + #0 [] [a, b, c, d, e, f] + #1 [] [T(a,b), c, d, e, f] + #2 [T(a,b)[:-1]] [T(a,b)[-1], c, d, e, f] + #3 [T(a,b)[:-1]] [T(T(a,b)[-1], c), d, e, f] + #4 [T(a,b)[:-1], T(T(a,b)[-1], c)[:-1]] [T(T(T(a,b)[-1], c)[-1], d), e, f] + #5 ... + + One limitation of the current implementation is that we assume that only the + last item of the transformed tuple goes back into the args to be transformed + again. These seems to handle the cases needed for Mul. However, we may need + to extend the algorithm to have the entire tuple go back into the args for + further transformation. + """ + args = list(expr.args) + result = [] + + # Continue as long as we have at least 2 elements + while len(args) > 1: + # Get first two elements + first = args.pop(0) + second = args[0] # Look at second element without popping yet + + transformed = _transform_state_pair(first, second) + + if transformed is None: + # If transform returns None, append first element + result.append(first) + else: + # This item was transformed, pop and discard + args.pop(0) + # The last item goes back to be transformed again + args.insert(0, transformed[-1]) + # All other items go directly into the result + result.extend(transformed[:-1]) + + # Append any remaining element + if args: + result.append(args[0]) + + return Mul._from_args(result, is_commutative=False) + + +def _postprocess_state_pow(expr): + """Handle bras and kets raised to powers. + + Under ``*`` multiplication this is invalid. Users should use a + TensorProduct instead. 
+ """ + base, exp = expr.as_base_exp() + if base.kind == KetKind or base.kind == BraKind: + raise TypeError( + 'A bra or ket to a power is invalid, use TensorProduct instead.' + ) + + +def _postprocess_tp_pow(expr): + """Handle TensorProduct(*operators)**(positive integer). + + This handles a tensor product of operators, to an integer power. + The power here is interpreted as regular multiplication, not + tensor product exponentiation. The form of exponentiation performed + here leaves the space and dimension of the object the same. + + This operation does not make sense for tensor product's of states. + """ + base, exp = expr.as_base_exp() + debug('_postprocess_tp_pow: ', base, exp, expr.args) + if isinstance(base, TensorProduct) and exp.is_integer and exp.is_positive and base.kind == OperatorKind: + new_args = [a**exp for a in base.args] + return TensorProduct(*new_args) + + +#----------------------------------------------------------------------------- +# Register the transformers with Basic._constructor_postprocessor_mapping +#----------------------------------------------------------------------------- + + +Basic._constructor_postprocessor_mapping[StateBase] = { + "Mul": [_postprocess_state_mul], + "Pow": [_postprocess_state_pow] +} + +Basic._constructor_postprocessor_mapping[TensorProduct] = { + "Mul": [_postprocess_state_mul], + "Pow": [_postprocess_tp_pow] +} + +Basic._constructor_postprocessor_mapping[Operator] = { + "Mul": [_postprocess_state_mul] +}
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
xorbitsai__inference-449@e4dbd11
xorbitsai/inference
Python
449
ENH: Safe iterate stream of ggml model
- [x] Use FastAPI stream.
- [x] Fix parallel ggml stream undefined behaviour.

Fix: https://github.com/xorbitsai/inference/issues/448
2023-09-14T06:59:40Z
BUG: Incorrect traceback if raises an exception when stream iteration ### Describe the bug A clear and concise description of what the bug is. ### To Reproduce To help us to reproduce this bug, please provide information below: ```python xinference/tests/test_client.py:188 (test_RESTful_client) self = <urllib3.response.HTTPResponse object at 0x2b3f437c0> def _update_chunk_length(self): # First, we'll figure out length of a chunk and then # we'll try to read it from socket. if self.chunk_left is not None: return line = self._fp.fp.readline() line = line.split(b";", 1)[0] try: > self.chunk_left = int(line, 16) E ValueError: invalid literal for int() with base 16: b'' ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:761: ValueError During handling of the above exception, another exception occurred: self = <urllib3.response.HTTPResponse object at 0x2b3f437c0> @contextmanager def _error_catcher(self): """ Catch low-level python exceptions, instead re-raising urllib3 variants, so that low-level exceptions are not leaked in the high-level api. On exit, release the connection back to the pool. """ clean_exit = False try: try: > yield ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:444: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:828: in read_chunked self._update_chunk_length() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.response.HTTPResponse object at 0x2b3f437c0> def _update_chunk_length(self): # First, we'll figure out length of a chunk and then # we'll try to read it from socket. if self.chunk_left is not None: return line = self._fp.fp.readline() line = line.split(b";", 1)[0] try: self.chunk_left = int(line, 16) except ValueError: # Invalid chunked protocol response, abort. self.close() > raise InvalidChunkLength(self, line) E urllib3.exceptions.InvalidChunkLength: InvalidChunkLength(got length b'', 0 bytes read) ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:765: InvalidChunkLength During handling of the above exception, another exception occurred: def generate(): # Special case for urllib3. if hasattr(self.raw, "stream"): try: > yield from self.raw.stream(chunk_size, decode_content=True) ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/requests/models.py:816: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:624: in stream for line in self.read_chunked(amt, decode_content=decode_content): ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:816: in read_chunked with self._error_catcher(): ../../.pyenv/versions/3.11.4/lib/python3.11/contextlib.py:155: in __exit__ self.gen.throw(typ, value, traceback) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.response.HTTPResponse object at 0x2b3f437c0> @contextmanager def _error_catcher(self): """ Catch low-level python exceptions, instead re-raising urllib3 variants, so that low-level exceptions are not leaked in the high-level api. On exit, release the connection back to the pool. """ clean_exit = False try: try: yield except SocketTimeout: # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but # there is yet no clean way to get at it from this context. 
raise ReadTimeoutError(self._pool, None, "Read timed out.") except BaseSSLError as e: # FIXME: Is there a better way to differentiate between SSLErrors? if "read operation timed out" not in str(e): # SSL errors related to framing/MAC get wrapped and reraised here raise SSLError(e) raise ReadTimeoutError(self._pool, None, "Read timed out.") except (HTTPException, SocketError) as e: # This includes IncompleteRead. > raise ProtocolError("Connection broken: %r" % e, e) E urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read)) ../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:461: ProtocolError ``` 1. Your Python version. 2. The version of xinference you use. 3. Versions of crucial packages. 4. Full stack of the error. 5. Minimized code to reproduce the error. ### Expected behavior A clear and concise description of what you expected to happen. ### Additional context Add any other context about the problem here.
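The traceback above is what the client sees when the server-side generator raises part-way through a chunked response: urllib3 only observes the truncated chunk framing, so the real error is lost. A dependency-free sketch of the direction the fix takes, where the error travels in-band as a final JSON chunk and the client re-raises it (`fake_model_stream` is a stand-in for the real model generator):

```python
import json

def fake_model_stream():
    # stand-in for the model generator; it fails after the first chunk
    yield {"choices": [{"text": "Paris"}]}
    raise RuntimeError("model crashed mid-stream")

def server_chunks(gen):
    # server side: never let the exception kill the chunked body,
    # serialize it as one last JSON chunk instead
    try:
        for item in gen:
            yield json.dumps(item).encode("utf-8")
    except Exception as ex:
        yield json.dumps({"error": str(ex)}).encode("utf-8")

def client_iter(chunks):
    # client side: parse each chunk and surface the in-band error
    for raw in chunks:
        content = json.loads(raw.decode("utf-8"))
        if content.get("error") is not None:
            raise Exception(str(content["error"]))
        yield content

try:
    for chunk in client_iter(server_chunks(fake_model_stream())):
        print(chunk)
except Exception as ex:
    print("client saw:", ex)   # client saw: model crashed mid-stream
```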
[ { "body": "### Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n### To Reproduce\r\nTo help us to reproduce this bug, please provide information below:\r\n\r\n```python\r\nxinference/tests/test_client.py:188 (test_RESTful_client)\r\nself = <urllib3.response.HTTPResponse object at 0x2b3f437c0>\r\n\r\n def _update_chunk_length(self):\r\n # First, we'll figure out length of a chunk and then\r\n # we'll try to read it from socket.\r\n if self.chunk_left is not None:\r\n return\r\n line = self._fp.fp.readline()\r\n line = line.split(b\";\", 1)[0]\r\n try:\r\n> self.chunk_left = int(line, 16)\r\nE ValueError: invalid literal for int() with base 16: b''\r\n\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:761: ValueError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nself = <urllib3.response.HTTPResponse object at 0x2b3f437c0>\r\n\r\n @contextmanager\r\n def _error_catcher(self):\r\n \"\"\"\r\n Catch low-level python exceptions, instead re-raising urllib3\r\n variants, so that low-level exceptions are not leaked in the\r\n high-level api.\r\n \r\n On exit, release the connection back to the pool.\r\n \"\"\"\r\n clean_exit = False\r\n \r\n try:\r\n try:\r\n> yield\r\n\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:444: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:828: in read_chunked\r\n self._update_chunk_length()\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <urllib3.response.HTTPResponse object at 0x2b3f437c0>\r\n\r\n def _update_chunk_length(self):\r\n # First, we'll figure out length of a chunk and then\r\n # we'll try to read it from socket.\r\n if self.chunk_left is not None:\r\n return\r\n line = self._fp.fp.readline()\r\n line = line.split(b\";\", 1)[0]\r\n try:\r\n self.chunk_left = int(line, 16)\r\n except ValueError:\r\n # Invalid chunked protocol response, abort.\r\n self.close()\r\n> raise InvalidChunkLength(self, line)\r\nE urllib3.exceptions.InvalidChunkLength: InvalidChunkLength(got length b'', 0 bytes read)\r\n\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:765: InvalidChunkLength\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n def generate():\r\n # Special case for urllib3.\r\n if hasattr(self.raw, \"stream\"):\r\n try:\r\n> yield from self.raw.stream(chunk_size, decode_content=True)\r\n\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/requests/models.py:816: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:624: in stream\r\n for line in self.read_chunked(amt, decode_content=decode_content):\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:816: in read_chunked\r\n with self._error_catcher():\r\n../../.pyenv/versions/3.11.4/lib/python3.11/contextlib.py:155: in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <urllib3.response.HTTPResponse object at 0x2b3f437c0>\r\n\r\n @contextmanager\r\n def _error_catcher(self):\r\n \"\"\"\r\n Catch low-level python exceptions, instead re-raising urllib3\r\n variants, so that low-level exceptions are not leaked in 
the\r\n high-level api.\r\n \r\n On exit, release the connection back to the pool.\r\n \"\"\"\r\n clean_exit = False\r\n \r\n try:\r\n try:\r\n yield\r\n \r\n except SocketTimeout:\r\n # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but\r\n # there is yet no clean way to get at it from this context.\r\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n \r\n except BaseSSLError as e:\r\n # FIXME: Is there a better way to differentiate between SSLErrors?\r\n if \"read operation timed out\" not in str(e):\r\n # SSL errors related to framing/MAC get wrapped and reraised here\r\n raise SSLError(e)\r\n \r\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n \r\n except (HTTPException, SocketError) as e:\r\n # This includes IncompleteRead.\r\n> raise ProtocolError(\"Connection broken: %r\" % e, e)\r\nE urllib3.exceptions.ProtocolError: (\"Connection broken: InvalidChunkLength(got length b'', 0 bytes read)\", InvalidChunkLength(got length b'', 0 bytes read))\r\n\r\n../../.pyenv/versions/3.11.4/lib/python3.11/site-packages/urllib3/response.py:461: ProtocolError\r\n```\r\n\r\n1. Your Python version.\r\n2. The version of xinference you use.\r\n3. Versions of crucial packages.\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n### Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n### Additional context\r\nAdd any other context about the problem here.\r\n", "number": 448, "title": "BUG: Incorrect traceback if raises an exception when stream iteration " } ]
b3dcd1577e3986e264a4f985cbcd6140f67f4041
{ "head_commit": "e4dbd1137fd5c1f643b8a6170ba7f869c7804c37", "head_commit_message": "Avoid parallel stream iteration on ggml model", "patch_to_review": "diff --git a/setup.cfg b/setup.cfg\nindex 7b92e1f6dd..2acfd1a976 100644\n--- a/setup.cfg\n+++ b/setup.cfg\n@@ -34,7 +34,6 @@ install_requires =\n pydantic<2\n fastapi\n uvicorn\n- sse_starlette\n huggingface-hub>=0.14.1,<1.0\n typing_extensions\n fsspec\ndiff --git a/xinference/client.py b/xinference/client.py\nindex a69579f52d..c7af0b8747 100644\n--- a/xinference/client.py\n+++ b/xinference/client.py\n@@ -208,15 +208,15 @@ def streaming_response_iterator(\n \n # Duplicate code due to type hint issues\n def chat_streaming_response_iterator(\n- response_lines: Iterator[bytes],\n+ response_chunk: Iterator[bytes],\n ) -> Iterator[\"ChatCompletionChunk\"]:\n \"\"\"\n Create an Iterator to handle the streaming type of generation.\n \n Parameters\n ----------\n- response_lines: Iterator[bytes]\n- Generated lines by the Model Generator.\n+ response_chunk: Iterator[bytes]\n+ Generated chunk by the Model Generator.\n \n Returns\n -------\n@@ -225,11 +225,12 @@ def chat_streaming_response_iterator(\n \n \"\"\"\n \n- for line in response_lines:\n- line = line.strip()\n- if line.startswith(b\"data:\"):\n- data = json.loads(line.decode(\"utf-8\").replace(\"data: \", \"\", 1))\n- yield data\n+ for line in response_chunk:\n+ content = json.loads(line.decode(\"utf-8\"))\n+ error = content.get(\"error\", None)\n+ if error is not None:\n+ raise Exception(str(error))\n+ yield content\n \n \n class RESTfulModelHandle:\n@@ -405,7 +406,9 @@ def chat(\n )\n \n if stream:\n- return chat_streaming_response_iterator(response.iter_lines())\n+ return chat_streaming_response_iterator(\n+ response.iter_content(chunk_size=None)\n+ )\n \n response_data = response.json()\n return response_data\n@@ -469,7 +472,9 @@ def chat(\n )\n \n if stream:\n- return chat_streaming_response_iterator(response.iter_lines())\n+ return chat_streaming_response_iterator(\n+ response.iter_content(chunk_size=None)\n+ )\n \n response_data = response.json()\n return response_data\ndiff --git a/xinference/core/model.py b/xinference/core/model.py\nindex 5bde398808..b228dabcea 100644\n--- a/xinference/core/model.py\n+++ b/xinference/core/model.py\n@@ -110,6 +110,9 @@ def load(self):\n \n async def _wrap_generator(self, ret: Any):\n if inspect.isgenerator(ret):\n+ if self._lock is not None:\n+ # Make sure only one iterator is valid.\n+ self._generators.clear()\n generator_uid = str(uuid.uuid1())\n self._generators[generator_uid] = ret\n return IteratorWrapper(\n@@ -168,6 +171,11 @@ async def next(\n \n def _wrapper():\n try:\n+ if self._lock is not None and generator_uid not in self._generators:\n+ raise Exception(\n+ f\"The generator {generator_uid} is invalid, \"\n+ f\"parallel iteration is not supported by ggml.\"\n+ )\n return next(self._generators[generator_uid])\n except StopIteration:\n return stop\ndiff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py\nindex 59426d0522..b09fe70e37 100644\n--- a/xinference/core/restful_api.py\n+++ b/xinference/core/restful_api.py\n@@ -19,19 +19,15 @@\n import sys\n import threading\n import warnings\n-from functools import partial\n from typing import Any, Dict, List, Literal, Optional, Union\n \n-import anyio\n import gradio as gr\n import xoscar as xo\n-from anyio.streams.memory import MemoryObjectSendStream\n from fastapi import APIRouter, FastAPI, HTTPException, Request\n from fastapi.middleware.cors import CORSMiddleware\n-from 
fastapi.responses import JSONResponse\n+from fastapi.responses import JSONResponse, StreamingResponse\n from fastapi.staticfiles import StaticFiles\n from pydantic import BaseModel, Field\n-from sse_starlette.sse import EventSourceResponse\n from starlette.responses import RedirectResponse\n from typing_extensions import NotRequired, TypedDict\n from uvicorn import Config, Server\n@@ -517,32 +513,16 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- # create a pair of memory object streams\n- send_chan, recv_chan = anyio.create_memory_object_stream(10)\n-\n- async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n- async with inner_send_chan:\n- try:\n- iterator = await model.generate(body.prompt, kwargs)\n- async for chunk in iterator:\n- await inner_send_chan.send(dict(data=json.dumps(chunk)))\n- if await request.is_disconnected():\n- raise anyio.get_cancelled_exc_class()()\n- except anyio.get_cancelled_exc_class() as e:\n- logger.warning(\"disconnected\")\n- with anyio.move_on_after(1, shield=True):\n- logger.warning(\n- f\"Disconnected from client (via refresh/close) {request.client}\"\n- )\n- await inner_send_chan.send(dict(closing=True))\n- raise e\n- except Exception as e:\n- raise HTTPException(status_code=500, detail=str(e))\n-\n- return EventSourceResponse(\n- recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n- )\n \n+ async def encode_iterator():\n+ try:\n+ iterator = await model.generate(body.prompt, kwargs)\n+ async for item in iterator:\n+ yield json.dumps(item)\n+ except Exception as ex:\n+ yield json.dumps({\"error\": str(ex)})\n+\n+ return StreamingResponse(encode_iterator())\n else:\n try:\n return await model.generate(body.prompt, kwargs)\n@@ -640,37 +620,21 @@ async def create_chat_completion(\n )\n \n if body.stream:\n- # create a pair of memory object streams\n- send_chan, recv_chan = anyio.create_memory_object_stream(10)\n-\n- async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n- async with inner_send_chan:\n- try:\n- if is_chatglm_ggml:\n- iterator = await model.chat(prompt, chat_history, kwargs)\n- else:\n- iterator = await model.chat(\n- prompt, system_prompt, chat_history, kwargs\n- )\n- async for chunk in iterator:\n- await inner_send_chan.send(dict(data=json.dumps(chunk)))\n- if await request.is_disconnected():\n- raise anyio.get_cancelled_exc_class()()\n- except anyio.get_cancelled_exc_class() as e:\n- logger.warning(\"disconnected\")\n- with anyio.move_on_after(1, shield=True):\n- logger.warning(\n- f\"Disconnected from client (via refresh/close) {request.client}\"\n- )\n- await inner_send_chan.send(dict(closing=True))\n- raise e\n- except Exception as e:\n- raise HTTPException(status_code=500, detail=str(e))\n-\n- return EventSourceResponse(\n- recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n- )\n \n+ async def encode_iterator():\n+ try:\n+ if is_chatglm_ggml:\n+ iterator = await model.chat(prompt, chat_history, kwargs)\n+ else:\n+ iterator = await model.chat(\n+ prompt, system_prompt, chat_history, kwargs\n+ )\n+ async for item in iterator:\n+ yield json.dumps(item)\n+ except Exception as ex:\n+ yield json.dumps({\"error\": str(ex)})\n+\n+ return StreamingResponse(encode_iterator())\n else:\n try:\n if is_chatglm_ggml:\ndiff --git a/xinference/tests/test_client.py b/xinference/tests/test_client.py\nindex 5fe48e16b0..71f61f480b 100644\n--- a/xinference/tests/test_client.py\n+++ 
b/xinference/tests/test_client.py\n@@ -13,6 +13,7 @@\n # limitations under the License.\n \n import os\n+from concurrent.futures import ThreadPoolExecutor\n \n import pytest\n \n@@ -227,12 +228,30 @@ def test_RESTful_client(setup):\n completion = model.chat(\"What is the capital of France?\")\n assert \"content\" in completion[\"choices\"][0][\"message\"]\n \n- streaming_response = model.chat(\n- prompt=\"What is the capital of France?\", generate_config={\"stream\": True}\n- )\n-\n- for chunk in streaming_response:\n- assert \"content\" or \"role\" in chunk[\"choices\"][0][\"delta\"]\n+ def _check_stream():\n+ streaming_response = model.chat(\n+ prompt=\"What is the capital of France?\", generate_config={\"stream\": True}\n+ )\n+ for chunk in streaming_response:\n+ print(chunk)\n+ assert \"content\" or \"role\" in chunk[\"choices\"][0][\"delta\"]\n+\n+ _check_stream()\n+\n+ results = []\n+ with ThreadPoolExecutor() as executor:\n+ for _ in range(3):\n+ r = executor.submit(_check_stream)\n+ results.append(r)\n+ # Parallel iterates on a ggml model, only one can be success.\n+ error_count = 0\n+ for r in results:\n+ try:\n+ r.result()\n+ except Exception as ex:\n+ assert \"parallel iteration\" in str(ex)\n+ error_count += 1\n+ assert error_count == 2\n \n client.terminate_model(model_uid=model_uid)\n assert len(client.list_models()) == 0\n" }
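The patch above drops `sse_starlette`'s `EventSourceResponse` in favour of FastAPI's `StreamingResponse` wrapped around an async generator. A trimmed-down sketch of that wiring, assuming FastAPI is installed (`run_model` is a hypothetical stand-in for `model.generate(...)`):

```python
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def run_model(prompt: str):
    # stand-in for the real model iterator
    for token in ["Paris", " is", " the", " capital"]:
        yield {"choices": [{"text": token}]}

@app.post("/v1/completions")
async def create_completion(prompt: str):
    async def stream_results():
        try:
            async for item in run_model(prompt):
                yield json.dumps(item)
        except Exception as ex:
            # keep the chunked body well-formed; the client re-raises this
            yield json.dumps({"error": str(ex)})
    return StreamingResponse(stream_results())
```

On the client side the patch matches this by switching from `response.iter_lines()` to `response.iter_content(chunk_size=None)` and parsing each chunk as JSON instead of stripping an SSE `data:` prefix.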
[ { "diff_hunk": "@@ -517,32 +513,16 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- # create a pair of memory object streams\n- send_chan, recv_chan = anyio.create_memory_object_stream(10)\n-\n- async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n- async with inner_send_chan:\n- try:\n- iterator = await model.generate(body.prompt, kwargs)\n- async for chunk in iterator:\n- await inner_send_chan.send(dict(data=json.dumps(chunk)))\n- if await request.is_disconnected():\n- raise anyio.get_cancelled_exc_class()()\n- except anyio.get_cancelled_exc_class() as e:\n- logger.warning(\"disconnected\")\n- with anyio.move_on_after(1, shield=True):\n- logger.warning(\n- f\"Disconnected from client (via refresh/close) {request.client}\"\n- )\n- await inner_send_chan.send(dict(closing=True))\n- raise e\n- except Exception as e:\n- raise HTTPException(status_code=500, detail=str(e))\n-\n- return EventSourceResponse(\n- recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n- )\n \n+ async def encode_iterator():\n+ try:\n+ iterator = await model.generate(body.prompt, kwargs)\n+ async for item in iterator:\n+ yield json.dumps(item)\n+ except Exception as ex:\n+ yield json.dumps({\"error\": str(ex)})", "line": 524, "original_line": 523, "original_start_line": null, "path": "xinference/core/restful_api.py", "start_line": null, "text": "@user1:\nBetter log the error for debugging since `str(ex)` may not contain the full invocation stack.\n\n@author:\nGood suggestion." }, { "diff_hunk": "@@ -110,6 +110,9 @@ def load(self):\n \n async def _wrap_generator(self, ret: Any):\n if inspect.isgenerator(ret):\n+ if self._lock is not None:\n+ # Make sure only one iterator is valid.\n+ self._generators.clear()", "line": null, "original_line": 115, "original_start_line": null, "path": "xinference/core/model.py", "start_line": null, "text": "@user1:\nHow about simply raise an exception when self._lock is not None and self._generators is not empty?\n\n@author:\nCurrent behavior similars to C++ iterator, append data invalidate old iterators.\n\n@author:\nFixed. Thanks." }, { "diff_hunk": "@@ -517,32 +513,16 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- # create a pair of memory object streams\n- send_chan, recv_chan = anyio.create_memory_object_stream(10)\n-\n- async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n- async with inner_send_chan:\n- try:\n- iterator = await model.generate(body.prompt, kwargs)\n- async for chunk in iterator:\n- await inner_send_chan.send(dict(data=json.dumps(chunk)))\n- if await request.is_disconnected():\n- raise anyio.get_cancelled_exc_class()()\n- except anyio.get_cancelled_exc_class() as e:\n- logger.warning(\"disconnected\")\n- with anyio.move_on_after(1, shield=True):\n- logger.warning(\n- f\"Disconnected from client (via refresh/close) {request.client}\"\n- )\n- await inner_send_chan.send(dict(closing=True))\n- raise e\n- except Exception as e:\n- raise HTTPException(status_code=500, detail=str(e))\n-\n- return EventSourceResponse(\n- recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n- )\n \n+ async def encode_iterator():", "line": null, "original_line": 517, "original_start_line": null, "path": "xinference/core/restful_api.py", "start_line": null, "text": "@user1:\nWhat does `encode_iterator` mean? 
Since GPTs are decoder-only models, maybe `decode_iterator` sounds more reasonable?\r\n\r\nOr simply name it `stream_results`?\n\n@author:\nOK" }, { "diff_hunk": "@@ -168,6 +171,11 @@ async def next(\n \n def _wrapper():\n try:\n+ if self._lock is not None and generator_uid not in self._generators:\n+ raise Exception(", "line": null, "original_line": 175, "original_start_line": null, "path": "xinference/core/model.py", "start_line": null, "text": "@user1:\nHow about raising an RuntimeException since this is totally unexpected?\n\n@author:\nOK" } ]
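The first comment above asks for the error to be logged, since only `str(ex)` travels to the client inside the error chunk; the merged patch addresses this with `logger.exception`, which records the full traceback server-side. A minimal sketch of that split:

```python
import logging

logger = logging.getLogger(__name__)

def stream_error_chunk(ex: Exception) -> dict:
    # full stack goes to the server log, only the message goes to the client
    logger.exception("Completion stream got an error: %s", ex)
    return {"error": str(ex)}

try:
    raise ValueError("boom")
except Exception as ex:
    # called inside the handler, so logger.exception can pick up the traceback
    chunk = stream_error_chunk(ex)
```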
38f7ffeca1242f9fe6cac197567c7f2e538e4d9e
diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml index e10ce59281..f645fb323a 100644 --- a/.github/workflows/python.yaml +++ b/.github/workflows/python.yaml @@ -104,5 +104,8 @@ jobs: run: | pytest --timeout=1500 \ -W ignore::PendingDeprecationWarning \ - --cov-config=setup.cfg --cov-report=xml --cov=xinference xinference + --cov-config=setup.cfg --cov-report=xml --cov=xinference xinference/tests/test_client.py + pytest --timeout=1500 \ + -W ignore::PendingDeprecationWarning \ + --cov-config=setup.cfg --cov-report=xml --cov=xinference --ignore xinference/tests/test_client.py xinference working-directory: . diff --git a/setup.cfg b/setup.cfg index f245fe1a2e..509b1bafcd 100644 --- a/setup.cfg +++ b/setup.cfg @@ -34,7 +34,6 @@ install_requires = pydantic<2 fastapi uvicorn - sse_starlette huggingface-hub>=0.14.1,<1.0 typing_extensions fsspec diff --git a/xinference/client.py b/xinference/client.py index a69579f52d..04af40dcd8 100644 --- a/xinference/client.py +++ b/xinference/client.py @@ -182,14 +182,14 @@ def chat( def streaming_response_iterator( - response_lines: Iterator[bytes], + response_chunk: Iterator[bytes], ) -> Iterator["CompletionChunk"]: """ Create an Iterator to handle the streaming type of generation. Parameters ---------- - response_lines: Iterator[bytes] + response_chunk: Iterator[bytes] Generated lines by the Model Generator. Returns @@ -199,24 +199,25 @@ def streaming_response_iterator( """ - for line in response_lines: - line = line.strip() - if line.startswith(b"data:"): - data = json.loads(line.decode("utf-8").replace("data: ", "", 1)) - yield data + for chunk in response_chunk: + content = json.loads(chunk.decode("utf-8")) + error = content.get("error", None) + if error is not None: + raise Exception(str(error)) + yield content # Duplicate code due to type hint issues def chat_streaming_response_iterator( - response_lines: Iterator[bytes], + response_chunk: Iterator[bytes], ) -> Iterator["ChatCompletionChunk"]: """ Create an Iterator to handle the streaming type of generation. Parameters ---------- - response_lines: Iterator[bytes] - Generated lines by the Model Generator. + response_chunk: Iterator[bytes] + Generated chunk by the Model Generator. 
Returns ------- @@ -225,11 +226,12 @@ def chat_streaming_response_iterator( """ - for line in response_lines: - line = line.strip() - if line.startswith(b"data:"): - data = json.loads(line.decode("utf-8").replace("data: ", "", 1)) - yield data + for chunk in response_chunk: + content = json.loads(chunk.decode("utf-8")) + error = content.get("error", None) + if error is not None: + raise Exception(str(error)) + yield content class RESTfulModelHandle: @@ -327,7 +329,7 @@ def generate( ) if stream: - return streaming_response_iterator(response.iter_lines()) + return streaming_response_iterator(response.iter_content(chunk_size=None)) response_data = response.json() return response_data @@ -405,7 +407,9 @@ def chat( ) if stream: - return chat_streaming_response_iterator(response.iter_lines()) + return chat_streaming_response_iterator( + response.iter_content(chunk_size=None) + ) response_data = response.json() return response_data @@ -469,7 +473,9 @@ def chat( ) if stream: - return chat_streaming_response_iterator(response.iter_lines()) + return chat_streaming_response_iterator( + response.iter_content(chunk_size=None) + ) response_data = response.json() return response_data diff --git a/xinference/core/model.py b/xinference/core/model.py index 165e916820..7d9d0b37b9 100644 --- a/xinference/core/model.py +++ b/xinference/core/model.py @@ -118,6 +118,8 @@ def load(self): async def _wrap_generator(self, ret: Any): if inspect.isgenerator(ret) or inspect.isasyncgen(ret): + if self._lock is not None and self._generators: + raise Exception("Parallel generation is not supported by ggml.") generator_uid = str(uuid.uuid1()) self._generators[generator_uid] = ret diff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py index 9d2e55fdfa..a2569de5c0 100644 --- a/xinference/core/restful_api.py +++ b/xinference/core/restful_api.py @@ -19,19 +19,15 @@ import sys import threading import warnings -from functools import partial from typing import Any, Dict, List, Literal, Optional, Union -import anyio import gradio as gr import xoscar as xo -from anyio.streams.memory import MemoryObjectSendStream from fastapi import APIRouter, FastAPI, HTTPException, Request from fastapi.middleware.cors import CORSMiddleware -from fastapi.responses import JSONResponse +from fastapi.responses import JSONResponse, StreamingResponse from fastapi.staticfiles import StaticFiles from pydantic import BaseModel, Field -from sse_starlette.sse import EventSourceResponse from starlette.responses import RedirectResponse from typing_extensions import NotRequired, TypedDict from uvicorn import Config, Server @@ -517,32 +513,17 @@ async def create_completion(self, request: Request, body: CreateCompletionReques raise HTTPException(status_code=500, detail=str(e)) if body.stream: - # create a pair of memory object streams - send_chan, recv_chan = anyio.create_memory_object_stream(10) - - async def event_publisher(inner_send_chan: MemoryObjectSendStream): - async with inner_send_chan: - try: - iterator = await model.generate(body.prompt, kwargs) - async for chunk in iterator: - await inner_send_chan.send(dict(data=json.dumps(chunk))) - if await request.is_disconnected(): - raise anyio.get_cancelled_exc_class()() - except anyio.get_cancelled_exc_class() as e: - logger.warning("disconnected") - with anyio.move_on_after(1, shield=True): - logger.warning( - f"Disconnected from client (via refresh/close) {request.client}" - ) - await inner_send_chan.send(dict(closing=True)) - raise e - except Exception as e: - raise 
HTTPException(status_code=500, detail=str(e)) - - return EventSourceResponse( - recv_chan, data_sender_callable=partial(event_publisher, send_chan) - ) + async def stream_results(): + try: + iterator = await model.generate(body.prompt, kwargs) + async for item in iterator: + yield json.dumps(item) + except Exception as ex: + logger.exception("Completion stream got an error: %s", ex) + yield json.dumps({"error": str(ex)}) + + return StreamingResponse(stream_results()) else: try: return await model.generate(body.prompt, kwargs) @@ -640,37 +621,22 @@ async def create_chat_completion( ) if body.stream: - # create a pair of memory object streams - send_chan, recv_chan = anyio.create_memory_object_stream(10) - - async def event_publisher(inner_send_chan: MemoryObjectSendStream): - async with inner_send_chan: - try: - if is_chatglm_ggml: - iterator = await model.chat(prompt, chat_history, kwargs) - else: - iterator = await model.chat( - prompt, system_prompt, chat_history, kwargs - ) - async for chunk in iterator: - await inner_send_chan.send(dict(data=json.dumps(chunk))) - if await request.is_disconnected(): - raise anyio.get_cancelled_exc_class()() - except anyio.get_cancelled_exc_class() as e: - logger.warning("disconnected") - with anyio.move_on_after(1, shield=True): - logger.warning( - f"Disconnected from client (via refresh/close) {request.client}" - ) - await inner_send_chan.send(dict(closing=True)) - raise e - except Exception as e: - raise HTTPException(status_code=500, detail=str(e)) - - return EventSourceResponse( - recv_chan, data_sender_callable=partial(event_publisher, send_chan) - ) + async def stream_results(): + try: + if is_chatglm_ggml: + iterator = await model.chat(prompt, chat_history, kwargs) + else: + iterator = await model.chat( + prompt, system_prompt, chat_history, kwargs + ) + async for item in iterator: + yield json.dumps(item) + except Exception as ex: + logger.exception("Chat completion stream got an error: %s", ex) + yield json.dumps({"error": str(ex)}) + + return StreamingResponse(stream_results()) else: try: if is_chatglm_ggml: diff --git a/xinference/tests/test_client.py b/xinference/tests/test_client.py index 20d839fe91..5d746d7c98 100644 --- a/xinference/tests/test_client.py +++ b/xinference/tests/test_client.py @@ -231,12 +231,33 @@ def test_RESTful_client(setup): completion = model.chat("What is the capital of France?") assert "content" in completion["choices"][0]["message"] - streaming_response = model.chat( - prompt="What is the capital of France?", generate_config={"stream": True} - ) + def _check_stream(): + streaming_response = model.chat( + prompt="What is the capital of France?", + generate_config={"stream": True, "max_tokens": 5}, + ) + for chunk in streaming_response: + assert "content" or "role" in chunk["choices"][0]["delta"] + + _check_stream() + + results = [] + with ThreadPoolExecutor() as executor: + for _ in range(2): + r = executor.submit(_check_stream) + results.append(r) + # Parallel generation is not supported by ggml. + error_count = 0 + for r in results: + try: + r.result() + except Exception as ex: + assert "Parallel generation" in str(ex) + error_count += 1 + assert error_count == 1 - for chunk in streaming_response: - assert "content" or "role" in chunk["choices"][0]["delta"] + # After iteration finish, we can iterate again. 
+ _check_stream() client.terminate_model(model_uid=model_uid) assert len(client.list_models()) == 0 @@ -250,8 +271,9 @@ def test_RESTful_client(setup): assert len(client.list_models()) == 1 # Test concurrent chat is OK. + model = client.get_model(model_uid=model_uid) + def _check(stream=False): - model = client.get_model(model_uid=model_uid) completion = model.generate( "AI is going to", generate_config={"stream": stream, "max_tokens": 5} ) @@ -265,12 +287,18 @@ def _check(stream=False): for stream in [True, False]: results = [] + error_count = 0 with ThreadPoolExecutor() as executor: for _ in range(3): r = executor.submit(_check, stream=stream) results.append(r) for r in results: - r.result() + try: + r.result() + except Exception as ex: + assert "Parallel generation" in str(ex) + error_count += 1 + assert error_count == (2 if stream else 0) client.terminate_model(model_uid=model_uid) assert len(client.list_models()) == 0
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-27269@0105c46
sympy/sympy
Python
27,269
Added Lambdify with latex symbols test in test_codeprinter.py
#### References to other Issues or PRs

Fixes #23374

#### Brief description of what is fixed or changed

This PR adds the `test_lambdify_LaTeX_symbols_issue_23374` test in `test_codeprinter.py` to verify the handling of LaTeX-style symbols such as `x_{1}` being correctly converted to `x_1` during the lambdify process. @oscargus and @oscarbenjamin.

- Remove curly braces from subscripted variables: `x_{1}` -> `x_1`
- Added a test for `_print_Symbol` in `test_codeprinter.py`

#### Release Notes

<!-- BEGIN RELEASE NOTES -->
* printing
  * Fixed handling of LaTeX-style symbols in `test_lambdify_LaTeX_symbols_issue_23374` by accounting for possible substitution with Dummy symbols during lambdify.
<!-- END RELEASE NOTES -->
2024-11-17T23:33:38Z
Lambdify with latex symbols

I have a complicated expression that needs to be evaluated. If the evaluation fails with numpy, then it switches to sympy. My symbols contain LaTeX syntax.

This is the simplest reproducible example I could come up with:

```python
from sympy import *
import inspect
x1, x2 = symbols("x_{1} x_2")

f1 = lambdify([x1, x2], cos(x1**2 + x2**2))
print(inspect.getsource(f1))

# def _lambdifygenerated(x_1, x_2):
#     return cos(x_2**2 + x_1**2)
```

So far so good. Note that `x1` was processed to `x_1`. Now, let's switch to SymPy:

```python
f2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules="sympy")
print(inspect.getsource(f2))
```

```text
Traceback (most recent call last):
  File "/home/davide/Documents/Development/envs/sympy_plot/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "/tmp/ipykernel_82217/2465285261.py", line 8, in <module>
    f2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules="sympy")
  File "/home/davide/Documents/Development/envs/sympy_plot/lib/python3.9/site-packages/sympy/utilities/lambdify.py", line 900, in lambdify
    c = compile(funcstr, filename, 'exec')
  File "<lambdifygenerated-12>", line 2
    return cos(x_2**2 + x_{1}**2)
                          ^
SyntaxError: invalid syntax
```

Note that no processing is applied to `x1`, so curly brackets end up in the expression to be executed.
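Until the printer/dummify handling is fixed, one user-side workaround is to force dummification so that no invalid identifier reaches the generated source. This is only a sketch of a possible workaround, not the fix adopted in the PR; it assumes that `lambdify`'s `dummify=True` flag substitutes Dummy symbols for all symbol arguments before printing.

```python
from sympy import symbols, cos, lambdify
import inspect

x1, x2 = symbols("x_{1} x_2")

# dummify=True substitutes Dummy symbols for the arguments (and for their
# occurrences in the expression), so the braces never reach the generated code.
f2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules="sympy", dummify=True)
print(inspect.getsource(f2))
print(f2(1, 2))
```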
This should probably be fixed in the SymPy printer (and all printers...) similar to https://github.com/sympy/sympy/pull/20806

Probably the best place to change is here: https://github.com/sympy/sympy/blob/68c37df362e8585d72fc7ef490013bb8eff16e3e/sympy/printing/codeprinter.py#L397-L408

In that way, all code printers would benefit from it (under the assumption that {} cannot be part of a variable name in any supported language). This also means that at least the `_print_Symbol` method added in https://github.com/sympy/sympy/pull/20806 can be removed (actually that method is exactly what the above should be replaced with). Possibly this holds for other printers as well.

Is this issue still open?

Sort of. However, there is a PR solving it that is not yet merged. So if you want to work on something there are probably better issues.

> actually that method is exactly what the above should be replaced with

Replacing the function actually resolves the error. So is this a valid solution for this issue, or are there any other conditions remaining to be checked?

Related to this issue: https://github.com/sympy/sympy/issues/12463. Ideally this should be handled automatically by the dummify flag.

After having a look at all related PRs and issues (i.e. pertaining to dummify and lambdify), I have come to the conclusion that we should let `dummify=False` by default, but if the variable name is invalid (`.isidentifier() == False`) then `dummify=True` should be overridden. Am I correct till here?

Another altogether different thought of mine is that the `dummify` option should not be there, because we use dummify only if we _have to_. If there is no need for dummification then why would any user want to dummify their variables. So should it be removed?

There might be situations where dummification is needed but we don't correctly identify it. So I would keep the option.

I think maybe I found the problem. While creating the lambda function there are two things, `args` and `expr`. In the case of `args`, they are getting passed through the `_print_Symbol` of `PythonCodePrinter` in pycode.py https://github.com/sympy/sympy/blob/0c4cc831ab88b6ba39685540f74d9551544d06e5/sympy/printing/pycode.py#L564-L577 so the `x_{1}` changes to `x_1`, therefore the `_is_safe_ident` check in https://github.com/sympy/sympy/blob/0c4cc831ab88b6ba39685540f74d9551544d06e5/sympy/utilities/lambdify.py#L1189-L1190 passes and the control doesn't enter the `if` block. However, the `expr` (when `modules=sympy`) doesn't get passed through `_print_Symbol` of `PythonCodePrinter` in pycode.py, and as a result it remains `x_{1}`, which then raises an error on compiling.

On removing the code that removes "{}" from variables, the `_is_safe_ident` check fails and as a result that argument is dummified.
```diff
 def _print_Symbol(self, expr):
     name = super()._print_Symbol(expr)
     if name in self.reserved_words:
         if self._settings['error_on_reserved']:
             msg = ('This expression includes the symbol "{}" which is a '
                    'reserved keyword in this language.')
             raise ValueError(msg.format(name))
         return name + self._settings['reserved_word_suffix']
-    elif '{' in name:   # Remove curly braces from subscripted variables
-        return name.replace('{', '').replace('}', '')
     else:
         return name
```

So after the changes, this will be the output of the above example:

```python
>>> from sympy import *
>>> import inspect
>>> x1, x2 = symbols("x_1 x_{2}")
>>> f1 = lambdify([x1, x2], cos(x1**2 + x2**2))
>>> print(inspect.getsource(f1))
def _lambdifygenerated(x_1, Dummy_22):
    return cos(Dummy_22**2 + x_1**2)

>>> f2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules="sympy")
>>> print(inspect.getsource(f2))
def _lambdifygenerated(x_1, Dummy_23):
    return cos(Dummy_23**2 + x_1**2)
```

which is the desired output? Any thoughts?
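To make the selective-dummification idea above concrete, here is a minimal user-level sketch; the helper name `lambdify_safe` and the `isidentifier()` check are illustrative assumptions for this discussion, not part of the actual patch.

```python
from sympy import Dummy, cos, lambdify, symbols

def lambdify_safe(args, expr, **kwargs):
    # Dummify only those arguments whose names are not valid Python
    # identifiers (e.g. LaTeX-style names such as "x_{1}").
    subs = {a: Dummy() for a in args if not a.name.isidentifier()}
    new_args = [subs.get(a, a) for a in args]
    return lambdify(new_args, expr.xreplace(subs), **kwargs)

# The LaTeX-style name is replaced by a Dummy before code generation,
# so the generated source compiles for any backend, including "sympy".
x1, x2 = symbols("x_{1} x_2")
f = lambdify_safe([x1, x2], cos(x1**2 + x2**2), modules="sympy")
print(f(1, 2))
```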
[ { "body": "I have a complicated expression that needs to be evaluated. If the evaluation fails with numpy, than it switches to sympy. My symbols contains latex syntax.\r\n\r\nThis is the simplest reproducible example I could come up with:\r\n\r\n```python\r\nfrom sympy import *\r\nimport inspect\r\nx1, x2 = symbols(\"x_{1} x_2\")\r\n\r\nf1 = lambdify([x1, x2], cos(x1**2 + x2**2))\r\nprint(inspect.getsource(f1))\r\n\r\n# def _lambdifygenerated(x_1, x_2):\r\n# return cos(x_2**2 + x_1**2) \r\n```\r\nSo far so good. Note that `x1` was processed to `x_1`. Now, let's switch to SymPy:\r\n\r\n```python\r\nf2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules=\"sympy\")\r\nprint(inspect.getsource(f2))\r\n```\r\n\r\n```text\r\nTraceback (most recent call last):\r\n\r\n File \"/home/davide/Documents/Development/envs/sympy_plot/lib/python3.9/site-packages/IPython/core/interactiveshell.py\", line 3457, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n\r\n File \"/tmp/ipykernel_82217/2465285261.py\", line 8, in <module>\r\n f2 = lambdify([x1, x2], cos(x1**2 + x2**2), modules=\"sympy\")\r\n\r\n File \"/home/davide/Documents/Development/envs/sympy_plot/lib/python3.9/site-packages/sympy/utilities/lambdify.py\", line 900, in lambdify\r\n c = compile(funcstr, filename, 'exec')\r\n\r\n File \"<lambdifygenerated-12>\", line 2\r\n return cos(x_2**2 + x_{1}**2)\r\n ^\r\nSyntaxError: invalid syntax\r\n```\r\n\r\nNote that no processing is applied to `x1`, so curl brackets end up in the expression to be executed.", "number": 23374, "title": "Lambdify with latex symbols" } ]
d3bded9d53d554bab23689b45b77871555d2f2fc
{ "head_commit": "0105c46db79135bf786c8b74f3a61ec683f016d2", "head_commit_message": "Added test for _print_Symbol in test_codeprinter.py", "patch_to_review": "diff --git a/sympy/printing/codeprinter.py b/sympy/printing/codeprinter.py\nindex 765f7f01f4c7..15a054609274 100644\n--- a/sympy/printing/codeprinter.py\n+++ b/sympy/printing/codeprinter.py\n@@ -418,7 +418,6 @@ def _print_Variable(self, expr):\n return self._print(expr.symbol)\n \n def _print_Symbol(self, expr):\n-\n name = super()._print_Symbol(expr)\n \n if name in self.reserved_words:\n@@ -428,7 +427,8 @@ def _print_Symbol(self, expr):\n raise ValueError(msg.format(name))\n return name + self._settings['reserved_word_suffix']\n else:\n- return name\n+ # Remove curly braces from subscripted variables. x_{1} -> x_1\n+ return name.replace('{', '').replace('}', '')\n \n def _can_print(self, name):\n \"\"\" Check if function ``name`` is either a known function or has its own\ndiff --git a/sympy/printing/tests/test_codeprinter.py b/sympy/printing/tests/test_codeprinter.py\nindex 2d89a27dab37..bb4efd95a2f4 100644\n--- a/sympy/printing/tests/test_codeprinter.py\n+++ b/sympy/printing/tests/test_codeprinter.py\n@@ -1,3 +1,4 @@\n+from sympy import lambdify\n from sympy.printing.codeprinter import CodePrinter, PrintMethodNotImplementedError\n from sympy.core import symbols\n from sympy.core.symbol import Dummy\n@@ -36,6 +37,28 @@ def test_print_Symbol():\n p.reserved_words.update(['if'])\n assert p._print(y) == 'if_He_Man'\n \n+\n+def test_lambdify_LaTeX_symbols_issue_23374():\n+ # Create symbols with Latex style names\n+ x1, x2 = symbols(\"x_{1} x_2\")\n+\n+ # Set up the printer\n+ p = setup_test_printer()\n+\n+ # Print symbols with Latex style names to check if they are converted properly\n+ assert p._print(x1) == 'x_1'\n+ assert p._print(x2) == 'x_2'\n+\n+ # Lambdify the function\n+ from sympy import cos\n+ f1 = lambdify([x1, x2], cos(x1 ** 2 + x2 ** 2))\n+\n+ # Check if the generated Python function is correct (no curly braces in variable names)\n+ import inspect\n+ generated_code = inspect.getsource(f1)\n+ assert 'x_1' in generated_code or 'Dummy' in generated_code\n+\n+\n def test_issue_15791():\n class CrashingCodePrinter(CodePrinter):\n def emptyPrinter(self, obj):\n" }
[ { "diff_hunk": "@@ -428,7 +427,8 @@ def _print_Symbol(self, expr):\n raise ValueError(msg.format(name))\n return name + self._settings['reserved_word_suffix']\n else:\n- return name\n+ # Remove curly braces from subscripted variables. x_{1} -> x_1\n+ return name.replace('{', '').replace('}', '')", "line": null, "original_line": 431, "original_start_line": 430, "path": "sympy/printing/codeprinter.py", "start_line": null, "text": "@user1:\nWhy do you even need this if the code already works fine in the master for dummyfying.\r\n```\r\nx1, x2 = symbols(\"x_{1} x_2\")\r\nf1 = lambdify([x1, x2], cos(x1 ** 2 + x2 ** 2))\r\nf1(1, 2)\r\n```\r\n\r\nI also don't think that printer should have the string replace logic unless necessary.\n\n@author:\nThank you for your feedback @user1 I will restore to the previous code and update the test." } ]
c37138c0c340a2bd445092425582777312559bfd
diff --git a/sympy/printing/codeprinter.py b/sympy/printing/codeprinter.py index 765f7f01f4c7..4a654580b8a3 100644 --- a/sympy/printing/codeprinter.py +++ b/sympy/printing/codeprinter.py @@ -418,7 +418,6 @@ def _print_Variable(self, expr): return self._print(expr.symbol) def _print_Symbol(self, expr): - name = super()._print_Symbol(expr) if name in self.reserved_words: diff --git a/sympy/printing/tests/test_codeprinter.py b/sympy/printing/tests/test_codeprinter.py index 2d89a27dab37..4b077037eb84 100644 --- a/sympy/printing/tests/test_codeprinter.py +++ b/sympy/printing/tests/test_codeprinter.py @@ -2,6 +2,10 @@ from sympy.core import symbols from sympy.core.symbol import Dummy from sympy.testing.pytest import raises +from sympy import cos +from sympy.utilities.lambdify import lambdify +from math import cos as math_cos +from sympy.printing.lambdarepr import LambdaPrinter def setup_test_printer(**kwargs): @@ -36,6 +40,24 @@ def test_print_Symbol(): p.reserved_words.update(['if']) assert p._print(y) == 'if_He_Man' + +def test_lambdify_LaTeX_symbols_issue_23374(): + # Create symbols with Latex style names + x1, x2 = symbols("x_{1} x_2") + + # Lambdify the function + f1 = lambdify([x1, x2], cos(x1 ** 2 + x2 ** 2)) + + # Test that the function works correctly (numerically) + assert f1(1, 2) == math_cos(1 ** 2 + 2 ** 2) + + # Explicitly generate a custom printer to verify the naming convention + p = LambdaPrinter() + expr_str = p.doprint(cos(x1 ** 2 + x2 ** 2)) + assert 'x_1' in expr_str + assert 'x_2' in expr_str + + def test_issue_15791(): class CrashingCodePrinter(CodePrinter): def emptyPrinter(self, obj):
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-27352@b3187d6
sympy/sympy
Python
27,352
Add QR Decomposition Method for DomainMatrix and DDM
#### References to other Issues or PRs

Fixes #27250

#### Brief description of what is fixed or changed

* Added `qr` method to DomainMatrix
* Implemented Division-based `qr` method for DDM
* Added tests to verify:
  * Correct shapes of Q and R
  * Matrix equality A = Q ⋅ R within the specified domain.

#### Other comments

Future work includes adding a `qrd` method using fraction-free LU decomposition.

#### Release Notes

<!-- BEGIN RELEASE NOTES -->
* matrices
  * Added qr method for DomainMatrix to compute the QR decomposition.
  * Added a Division-based qr method for DDM to handle domain-aware QR decomposition.
<!-- END RELEASE NOTES -->
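For orientation, a short usage sketch of the `qr` method described above, modeled on the docstring example in the merged patch; it assumes the final API returns `(Q, R)` over a field such as `QQ`, with `Q` having orthogonal (not normalized) columns and `R` upper triangular.

```python
from sympy import QQ
from sympy.polys.matrices import DomainMatrix

A = DomainMatrix([[1, 2], [3, 4], [5, 6]], (3, 2), QQ)
Q, R = A.qr()

# A is recovered exactly; Q's columns are orthogonal but not normalized,
# which keeps the arithmetic exact within the domain.
assert Q * R == A
assert R.is_upper
assert (Q.transpose() * Q).is_diagonal
```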
2024-12-08T15:36:38Z
GramSchmidt is really, really slow

I tried applying Gram-Schmidt to construct an orthonormal basis of the following four vectors:

$$
\begin{bmatrix} -1 + \frac{\sqrt{2}}{2} \\\\ \frac{1}{2} + \frac{\sqrt{2}}{2} \\\\ - \frac{\sqrt{2}}{2} - \frac{1}{2} \\\\ 0 \end{bmatrix},
\begin{bmatrix} -1 - \frac{\sqrt{2}}{2} \\\\ 0 \\\\ 0 \\\\ 0 \end{bmatrix},
\begin{bmatrix} \frac{\sqrt{2}}{2} \\\\ \frac{1}{2} \\\\ -\frac{1}{2} \\\\ \frac{1}{2} \end{bmatrix},
\begin{bmatrix} - \frac{\sqrt{2}}{2} \\\\ \frac{\sqrt{2}}{2} \\\\ - \frac{\sqrt{2}}{2} \\\\ -\frac{1}{2} \end{bmatrix}
$$

I've been waiting for _hours_ for a result and don't know why it should take so long. Here's my code:

```Python
from sympy import *

vecs = [
    Matrix([
        -1 + sqrt(2)/2,
        1/2 + sqrt(2)/2,
        -sqrt(2)/2 - 1/2,
        0,
    ]),
    Matrix([
        -1 - sqrt(2)/2,
        0,
        0,
        0,
    ]),
    Matrix([
        sqrt(2)/2,
        1/2,
        -1/2,
        1/2]),
    Matrix([
        -sqrt(2)/2,
        sqrt(2)/2,
        -sqrt(2)/2,
        -1/2,
    ]),
]
gs = GramSchmidt(vecs, True)
print(gs)
```

Am I doing something wrong, or is this slowness to be expected? I'm running the SymPy 1.12 package that ships with Ubuntu Noble.
It looks like this suffers expression blowup in the QR code. I guess the QR decomposition should be implemented for DomainMatrix. This is the matrix code for QR adapted to DomainMatrix: ```python from sympy import Matrix from sympy.polys.matrices import DomainMatrix from sympy.polys.matrices.ddm import DDM, DMDomainError def orthogonalize(A: Matrix): Ad: DDM = A.to_DM(extension=True, field=True).to_ddm() K = Ad.domain if not K.is_Field: raise DMDomainError("orthogonalize requires a field") rows, cols = Ad.shape Q = Ad.copy() R = Ad.zeros((rows, cols), K) is_zero_col = lambda j: not any(Q[i][j] for i in range(rows)) dot_cols = lambda i, j: K.sum([Q[k][i] * Q[k][j] for k in range(rows)]) ranked = [] for j in range(cols): for i in range(j): if is_zero_col(i): continue R[i][j] = dot_cols(i, j) / dot_cols(i, i) for k in range(rows): Q[k][j] -= R[i][j] * Q[k][i] if not is_zero_col(j): ranked.append(j) R[j][j] = K.one Q = Q.extract(range(rows), ranked) R = R.extract(ranked, range(cols)) QM = DomainMatrix.from_rep(Q).to_Matrix() RM = DomainMatrix.from_rep(R).to_Matrix() return QM, RM ``` Then: ```python In [2]: M = Matrix.hstack(*vecs).applyfunc(nsimplify) In [3]: M Out[3]: ⎡ √2 √2 √2 -√2 ⎤ ⎢-1 + ── -1 - ── ── ────⎥ ⎢ 2 2 2 2 ⎥ ⎢ ⎥ ⎢ 1 √2 √2 ⎥ ⎢ ─ + ── 0 1/2 ── ⎥ ⎢ 2 2 2 ⎥ ⎢ ⎥ ⎢ √2 1 -√2 ⎥ ⎢- ── - ─ 0 -1/2 ────⎥ ⎢ 2 2 2 ⎥ ⎢ ⎥ ⎣ 0 0 1/2 -1/2⎦ In [4]: edit ortho.py Editing... done. Executing edited code... In [5]: %time Q, R = orthogonalize(M) CPU times: user 24.4 ms, sys: 23 μs, total: 24.4 ms Wall time: 23.6 ms In [6]: Q.shape Out[6]: (4, 3) In [7]: Q Out[7]: ⎡ √2 5 7⋅√2 ⎤ ⎢-1 + ── - ─ - ──── 0 ⎥ ⎢ 2 6 12 ⎥ ⎢ ⎥ ⎢ 1 √2 √2 1 ⎥ ⎢ ─ + ── - ── - ── 0 ⎥ ⎢ 2 2 12 12 ⎥ ⎢ ⎥ ⎢ √2 1 1 √2 ⎥ ⎢- ── - ─ ── + ── 0 ⎥ ⎢ 2 2 12 12 ⎥ ⎢ ⎥ ⎣ 0 0 1/2⎦ In [8]: expand(Q.T*Q) Out[8]: ⎡3 0 0 ⎤ ⎢ ⎥ ⎢ 17 ⎥ ⎢0 √2 + ── 0 ⎥ ⎢ 12 ⎥ ⎢ ⎥ ⎣0 0 1/4⎦ In [9]: expand(Q*R - M) Out[9]: ⎡0 0 0 0⎤ ⎢ ⎥ ⎢0 0 0 0⎥ ⎢ ⎥ ⎢0 0 0 0⎥ ⎢ ⎥ ⎣0 0 0 0⎦ ``` There will be some way to adjust the above so that it is fraction-free. 
I'm not sure what is the best approach to control blowup here: ```python In [50]: M = randMatrix(3) + randMatrix(3)*x In [51]: M Out[51]: ⎡47⋅x + 56 2⋅x + 94 67⋅x + 88⎤ ⎢ ⎥ ⎢88⋅x + 30 68⋅x + 52 11⋅x + 26⎥ ⎢ ⎥ ⎣53⋅x + 25 5⋅x + 73 31⋅x + 24⎦ In [52]: %time Q, R = orthogonalize(M) CPU times: user 65.8 ms, sys: 21 μs, total: 65.8 ms Wall time: 65.1 ms In [53]: Q[:,0].T Out[53]: [47⋅x + 56 88⋅x + 30 53⋅x + 25] In [54]: Q[:,1].T Out[54]: ⎡ 3 2 3 2 3 2 ⎤ ⎢- 272597⋅x + 159228⋅x - 4785⋅x - 46210 309632⋅x + 38206⋅x - 212276⋅x - 17098 - 272369⋅x + 36601⋅x + 149570⋅x + 124028⎥ ⎢──────────────────────────────────────── ─────────────────────────────────────── ──────────────────────────────────────────⎥ ⎢ 2 2 2 ⎥ ⎣ 12762⋅x + 13194⋅x + 4661 12762⋅x + 13194⋅x + 4661 12762⋅x + 13194⋅x + 4661 ⎦ In [55]: Q[:,2].T Out[55]: ⎡ 5 4 3 2 5 4 3 2 5 4 ↪ ⎢379006068⋅x + 117583370⋅x - 1032802148⋅x + 237364162⋅x + 265464980⋅x + 31452600 15452523⋅x - 143100606⋅x + 25543331⋅x + 482317556⋅x - 325550272⋅x - 61420920 - 361756740⋅x - 105237020⋅x + 87 ↪ ⎢─────────────────────────────────────────────────────────────────────────────────── ──────────────────────────────────────────────────────────────────────────────── ────────────────────────────────── ↪ ⎢ 4 3 2 4 3 2 4 ↪ ⎣ 19147937⋅x - 26306722⋅x + 5929529⋅x - 1204476⋅x + 3821208 19147937⋅x - 26306722⋅x + 5929529⋅x - 1204476⋅x + 3821208 19147937⋅x - 26306722⋅ ↪ ↪ 3 2 ⎤ ↪ 9870996⋅x - 349551228⋅x - 53803192⋅x + 3251280⎥ ↪ ────────────────────────────────────────────────⎥ ↪ 3 2 ⎥ ↪ x + 5929529⋅x - 1204476⋅x + 3821208 ⎦ ``` This would be the non-field version: ```python def orthogonalize(A: Matrix): Ad: DDM = A.to_DM(extension=True).to_ddm() K = Ad.domain rows, cols = Ad.shape Q = Ad.copy() is_zero_col = lambda j: not any(Q[i][j] for i in range(rows)) dot_cols = lambda i, j: K.sum([Q[k][i] * Q[k][j] for k in range(rows)]) ranked = [] for j in range(cols): for i in range(j): if is_zero_col(i): continue _, dij, dii = K.cofactors(dot_cols(i, j), dot_cols(i, i)) for k in range(rows): Q[k][j] = dii * Q[k][j] - dij * Q[k][i] g = Q[0][j] for k in range(1, rows): g = K.gcd(g, Q[k][j]) for k in range(rows): Q[k][j] = K.exquo(Q[k][j], g) if not is_zero_col(j): ranked.append(j) Q = Q.extract(range(rows), ranked) QM = DomainMatrix.from_rep(Q).to_Matrix() return QM ``` Works better for polynomials but less well for algebraic number fields because gcd there is always 1. There should be some way to generalise this to a fraction-free QR decomposition something like: ``` M = Q*R*D**-1 ``` where `D` is a diagonal matrix and `Q`, `R` and `D` are fraction-free. Fraction-free QR is described here: https://www.uwo.ca/apmaths/faculty/jeffrey/pdfs/FFLUQR.pdf > Fraction-free matrix factors: new forms for LU and QR factors > Wenqin ZHOU (*), David J. JEFFREY Nice! Thanks, @oscarbenjamin, for the code. The various versions do appear to work as advertised. (I'll have to read up more on domainmatrixes, though.) Hello @oscarbenjamin , should the `orthogonalize` function replace GramSchmidt, or would implementing QR decomposition for DomainMatrix better address the expression blowup issue you mentioned? These things should be implemented for DomainMatrix and then the Matrix methods for this should use the DomainMatrix methods. There should be DomainMatrix methods `qr` and `qrd`. The `qr` method can use the first algorithm I shows and return matrices `Q` and `R` such that `A = Q*R`. 
The `qrd` method should be more like the second implementation I showed and compute `Q`, `R` and `D` such that `A = Q*D**-1*R` where `D` is a diagonal matrix and `Q` and `R` are in the same domain (integer, polynomial, ...) as the original matrix `A`.

The paper I linked shows how to compute the QRD decomposition using fraction-free LU decomposition which should also be implemented. For now though just adding a `qr` method and ensuring that the matrix code calls it would be good. The implementation I showed can be a method on the `DDM` class that `DomainMatrix` would call.

> There should be DomainMatrix methods `qr` and `qrd`. The `qr` method can use the first algorithm I shows and return matrices `Q` and `R` such that `A = Q*R`. The `qrd` method should be more like the second implementation I showed and compute `Q`, `R` and `D` such that `A = Q*D**-1*R` where `D` is a diagonal matrix and `Q` and `R` are in the same domain (integer, polynomial, ...) as the original matrix `A`.
>
> The paper I linked shows how to compute the QRD decomposition using fraction-free LU decomposition which should also be implemented. For now though just adding a `qr` method and ensuring that the matrix code calls it would be good. The implementation I showed can be a method on the `DDM` class that `DomainMatrix` would call.

Okay got it, thank you for the clarity
[ { "body": "I tried applying Gram-Schmidt to construct an orthonormal basis of the following four vectors:\n\n$$\n\\begin{bmatrix} -1 + \\frac{\\sqrt{2}}{2} \\\\\\\\ \\frac{1}{2} + \\frac{\\sqrt{2}}{2} \\\\\\\\ - \\frac{\\sqrt{2}}{2} - \\frac{1}{2} \\\\\\\\ 0 \\end{bmatrix},\n\\begin{bmatrix} -1 - \\frac{\\sqrt{2}}{2} \\\\\\\\ 0 \\\\\\\\ 0 \\\\\\\\ 0 \\end{bmatrix},\n\\begin{bmatrix} \\frac{\\sqrt{2}}{2} \\\\\\\\ \\frac{1}{2} \\\\\\\\ -\\frac{1}{2} \\\\\\\\ \\frac{1}{2} \\end{bmatrix},\n\\begin{bmatrix} - \\frac{\\sqrt{2}}{2} \\\\\\\\ \\frac{\\sqrt{2}}{2} \\\\\\\\ - \\frac{\\sqrt{2}}{2} \\\\\\\\ -\\frac{1}{2} \\end{bmatrix}\n$$\n\nI've been waiting for _hours_ for a result and don't know why it should take so long. Here's my code:\n```Python\nfrom sympy import *\n\nvecs = [\n Matrix([\n -1 + sqrt(2)/2,\n 1/2 + sqrt(2)/2,\n -sqrt(2)/2 - 1/2,\n 0,\n ]),\n Matrix([\n -1 - sqrt(2)/2,\n 0,\n 0,\n 0,\n ]),\n Matrix([\n sqrt(2)/2,\n 1/2,\n -1/2,\n 1/2]),\n Matrix([\n -sqrt(2)/2,\n sqrt(2)/2,\n -sqrt(2)/2,\n -1/2,\n ]),\n]\ngs = GramSchmidt(vecs, True)\nprint(gs)\n```\nAm I doing something wrong, or is this slowness to be expected? I'm running the SymPy 1.12 package that ships with Ubuntu Noble.", "number": 27250, "title": "GramSchmidt is really, really slow" } ]
e53690af7e57eeef7e8c6dea1521a4dc0024cf95
{ "head_commit": "b3187d679de12daa6b2224b521af4a58bc09f02c", "head_commit_message": "Added qr to SDM & DFM", "patch_to_review": "diff --git a/sympy/polys/matrices/_dfm.py b/sympy/polys/matrices/_dfm.py\nindex c2f6f16922e0..6a915f4fc61e 100644\n--- a/sympy/polys/matrices/_dfm.py\n+++ b/sympy/polys/matrices/_dfm.py\n@@ -694,6 +694,11 @@ def lu(self):\n L, U, swaps = self.to_ddm().lu()\n return L.to_dfm(), U.to_dfm(), swaps\n \n+ def qr(self):\n+ \"\"\"Return the QR decomposition of the matrix.\"\"\"\n+ Q, R = self.to_ddm().qr()\n+ return Q.to_dfm(), R.to_dfm()\n+\n # XXX: The lu_solve function should be renamed to solve. Whether or not it\n # uses an LU decomposition is an implementation detail. A method called\n # lu_solve would make sense for a situation in which an LU decomposition is\ndiff --git a/sympy/polys/matrices/ddm.py b/sympy/polys/matrices/ddm.py\nindex 8f90a3305367..1d027f8a5e6f 100644\n--- a/sympy/polys/matrices/ddm.py\n+++ b/sympy/polys/matrices/ddm.py\n@@ -958,6 +958,34 @@ def lu(a):\n \n return L, U, swaps\n \n+ def qr(self):\n+ \"\"\"\n+ Fraction-free QR decomposition for DDM.\n+ Returns:\n+ - Q: Orthogonal matrix as a DDM.\n+ - R: Upper triangular matrix as a DDM.\n+ \"\"\"\n+ rows, cols = self.shape\n+ Q = self.copy()\n+ R = self.zeros((rows, cols), self.domain)\n+\n+ is_zero_col = lambda j: not any(Q[i][j] for i in range(rows))\n+ dot_cols = lambda i, j: self.domain.sum(Q[k][i] * Q[k][j] for k in range(rows))\n+\n+ for j in range(cols):\n+ for i in range(j):\n+ if is_zero_col(i):\n+ continue\n+\n+ R[i][j] = dot_cols(i, j) // dot_cols(i, i)\n+ for k in range(rows):\n+ Q[k][j] -= R[i][j] * Q[k][i]\n+\n+ if not is_zero_col(j):\n+ R[j][j] = self.domain.one\n+\n+ return Q, R\n+\n def lu_solve(a, b):\n \"\"\"x where a*x = b\"\"\"\n m, n = a.shape\ndiff --git a/sympy/polys/matrices/domainmatrix.py b/sympy/polys/matrices/domainmatrix.py\nindex b91bef314d16..ff0bc07248ae 100644\n--- a/sympy/polys/matrices/domainmatrix.py\n+++ b/sympy/polys/matrices/domainmatrix.py\n@@ -3256,6 +3256,23 @@ def lu(self):\n L, U, swaps = self.rep.lu()\n return self.from_rep(L), self.from_rep(U), swaps\n \n+ def qr(self):\n+ \"\"\"\n+ QR decomposition for DomainMatrix.\n+\n+ Returns:\n+ - Q: Orthogonal DomainMatrix.\n+ - R: Upper triangular DomainMatrix.\n+ \"\"\"\n+ if isinstance(self.rep, DFM):\n+ ddm_q, ddm_r = self.rep.qr()\n+ else:\n+ ddm_q, ddm_r = self.rep.qr()\n+\n+ Q = DomainMatrix.from_rep(ddm_q)\n+ R = DomainMatrix.from_rep(ddm_r)\n+ return Q, R\n+\n def lu_solve(self, rhs):\n r\"\"\"\n Solver for DomainMatrix x in the A*x = B\ndiff --git a/sympy/polys/matrices/sdm.py b/sympy/polys/matrices/sdm.py\nindex 0e0685f5d6fb..02aba89e906b 100644\n--- a/sympy/polys/matrices/sdm.py\n+++ b/sympy/polys/matrices/sdm.py\n@@ -1065,6 +1065,19 @@ def lu(A):\n L, U, swaps = A.to_ddm().lu()\n return A.from_ddm(L), A.from_ddm(U), swaps\n \n+ def qr(self):\n+ \"\"\"\n+ QR decomposition for SDM (Sparse Domain Matrix).\n+\n+ Returns:\n+ - Q: Orthogonal matrix as a SDM.\n+ - R: Upper triangular matrix as a SDM.\n+ \"\"\"\n+ ddm_q, ddm_r = self.to_ddm().qr()\n+ Q = ddm_q.to_sdm()\n+ R = ddm_r.to_sdm()\n+ return Q, R\n+\n def lu_solve(A, b):\n \"\"\"\n \ndiff --git a/sympy/polys/matrices/tests/test_ddm.py b/sympy/polys/matrices/tests/test_ddm.py\nindex 44c862461e85..5bb1ba920467 100644\n--- a/sympy/polys/matrices/tests/test_ddm.py\n+++ b/sympy/polys/matrices/tests/test_ddm.py\n@@ -7,6 +7,7 @@\n from sympy.polys.matrices.exceptions import (\n DMShapeError, DMNonInvertibleMatrixError, DMDomainError,\n 
DMBadInputError)\n+from sympy.polys.matrices import DomainMatrix\n \n \n def test_DDM_init():\n@@ -556,3 +557,15 @@ def test_DDM_is_lower():\n ], (4, 3), QQ).transpose()\n assert A.is_lower() is True\n assert B.is_lower() is False\n+\n+\n+def test_ddm_qr():\n+ # Create a sample matrix in DomainMatrix\n+ A = DomainMatrix([[ZZ(3), ZZ(1)], [ZZ(4), ZZ(3)]], (2, 2), ZZ)\n+ # Perform QR decomposition\n+ Q, R = A.qr()\n+ # Check that the shapes are correct\n+ assert Q.shape == (2, 2)\n+ assert R.shape == (2, 2)\n+ # Check that A = Q * R (within the domain of integers)\n+ assert Q * R == A\ndiff --git a/sympy/polys/matrices/tests/test_domainmatrix.py b/sympy/polys/matrices/tests/test_domainmatrix.py\nindex 2b59d76a9d46..cd305f492eb8 100644\n--- a/sympy/polys/matrices/tests/test_domainmatrix.py\n+++ b/sympy/polys/matrices/tests/test_domainmatrix.py\n@@ -1360,3 +1360,11 @@ def test_DomainMatrix_pickling():\n assert pickle.loads(pickle.dumps(dM)) == dM\n dM = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)\n assert pickle.loads(pickle.dumps(dM)) == dM\n+\n+\n+def test_qr():\n+ A = DomainMatrix([[ZZ(3), ZZ(1)], [ZZ(4), ZZ(3)]], (2, 2), ZZ)\n+ Q, R = A.qr()\n+ assert Q.shape == (2, 2)\n+ assert R.shape == (2, 2)\n+ assert Q * R == A\n" }
[ { "diff_hunk": "@@ -3256,6 +3256,23 @@ def lu(self):\n L, U, swaps = self.rep.lu()\n return self.from_rep(L), self.from_rep(U), swaps\n \n+ def qr(self):\n+ \"\"\"\n+ QR decomposition for DomainMatrix.\n+\n+ Returns:\n+ - Q: Orthogonal DomainMatrix.\n+ - R: Upper triangular DomainMatrix.\n+ \"\"\"\n+ if isinstance(self.rep, DFM):\n+ ddm_q, ddm_r = self.rep.qr()\n+ else:\n+ ddm_q, ddm_r = self.rep.qr()", "line": null, "original_line": 3270, "original_start_line": 3267, "path": "sympy/polys/matrices/domainmatrix.py", "start_line": null, "text": "@user1:\nIf the code is the same in both branches then the if check is not needed.\n\n@author:\nOkay, I will update that." }, { "diff_hunk": "@@ -958,6 +958,34 @@ def lu(a):\n \n return L, U, swaps\n \n+ def qr(self):\n+ \"\"\"\n+ Fraction-free QR decomposition for DDM.\n+ Returns:\n+ - Q: Orthogonal matrix as a DDM.\n+ - R: Upper triangular matrix as a DDM.\n+ \"\"\"\n+ rows, cols = self.shape\n+ Q = self.copy()\n+ R = self.zeros((rows, cols), self.domain)\n+\n+ is_zero_col = lambda j: not any(Q[i][j] for i in range(rows))\n+ dot_cols = lambda i, j: self.domain.sum(Q[k][i] * Q[k][j] for k in range(rows))\n+\n+ for j in range(cols):\n+ for i in range(j):\n+ if is_zero_col(i):\n+ continue\n+\n+ R[i][j] = dot_cols(i, j) // dot_cols(i, i)", "line": null, "original_line": 980, "original_start_line": null, "path": "sympy/polys/matrices/ddm.py", "start_line": null, "text": "@user1:\nUsing `//` here is incorrect. This is not how fraction-free algorithms work: exact division (`exquo`) can be used but floor division should not.\n\n@author:\nOkay, I used `/` at first but after installing `python-flint`, `/` started giving me this` \"error: flint.utils.flint_exceptions.DomainError: fmpz division is not exact \"`\r\n\n\n@author:\nbut i will work around it, thank you for the nuggets. \n\n@user1:\nIf the domain is a field then `/` can be used.\n\n@author:\nwhat are the domains the `qr` method should support? The current implementation only supports integers (ZZ).\n\n@user1:\nThe QR decomposition should be computable for any field. To learn more about the domains see:\r\nhttps://docs.sympy.org/latest/modules/polys/domainsintro.html" } ]
f838189e7b9ec735e12498ea23aef51cfc33b361
diff --git a/sympy/polys/matrices/_dfm.py b/sympy/polys/matrices/_dfm.py index c2f6f16922e0..6a915f4fc61e 100644 --- a/sympy/polys/matrices/_dfm.py +++ b/sympy/polys/matrices/_dfm.py @@ -694,6 +694,11 @@ def lu(self): L, U, swaps = self.to_ddm().lu() return L.to_dfm(), U.to_dfm(), swaps + def qr(self): + """Return the QR decomposition of the matrix.""" + Q, R = self.to_ddm().qr() + return Q.to_dfm(), R.to_dfm() + # XXX: The lu_solve function should be renamed to solve. Whether or not it # uses an LU decomposition is an implementation detail. A method called # lu_solve would make sense for a situation in which an LU decomposition is diff --git a/sympy/polys/matrices/ddm.py b/sympy/polys/matrices/ddm.py index 8f90a3305367..a767dba163c9 100644 --- a/sympy/polys/matrices/ddm.py +++ b/sympy/polys/matrices/ddm.py @@ -958,6 +958,48 @@ def lu(a): return L, U, swaps + def qr(self): + """ + QR decomposition for DDM. + + Returns: + - Q: Orthogonal matrix as a DDM. + - R: Upper triangular matrix as a DDM. + + See Also + ======== + + sympy.polys.matrices.domainmatrix.DomainMatrix.qr + The higher-level interface to this function. + """ + rows, cols = self.shape + K = self.domain + Q = self.copy() + R = self.zeros((min(rows, cols), cols), K) + + # Check that the domain is a field + if not K.is_Field: + raise DMDomainError("QR decomposition requires a field (e.g. QQ).") + + dot_cols = lambda i, j: K.sum(Q[k][i] * Q[k][j] for k in range(rows)) + + for j in range(cols): + for i in range(min(j, rows)): + dot_ii = dot_cols(i, i) + if dot_ii != K.zero: + R[i][j] = dot_cols(i, j) / dot_ii + for k in range(rows): + Q[k][j] -= R[i][j] * Q[k][i] + + if j < rows: + dot_jj = dot_cols(j, j) + if dot_jj != K.zero: + R[j][j] = K.one + + Q = Q.extract(range(rows), range(min(rows, cols))) + + return Q, R + def lu_solve(a, b): """x where a*x = b""" m, n = a.shape diff --git a/sympy/polys/matrices/domainmatrix.py b/sympy/polys/matrices/domainmatrix.py index b91bef314d16..cb35714b4484 100644 --- a/sympy/polys/matrices/domainmatrix.py +++ b/sympy/polys/matrices/domainmatrix.py @@ -3256,6 +3256,68 @@ def lu(self): L, U, swaps = self.rep.lu() return self.from_rep(L), self.from_rep(U), swaps + def qr(self): + r""" + QR decomposition of the DomainMatrix. + + Explanation + =========== + + The QR decomposition expresses a matrix as the product of an orthogonal + matrix (Q) and an upper triangular matrix (R). In this implementation, + Q is not orthonormal: its columns are orthogonal but not normalized to + unit vectors. This avoids unnecessary divisions and is particularly + suited for exact arithmetic domains. + + Note + ==== + + This implementation is valid only for matrices over real domains. For + matrices over complex domains, a proper QR decomposition would require + handling conjugation to ensure orthogonality. + + Returns + ======= + + (Q, R) + Q is the orthogonal matrix, and R is the upper triangular matrix + resulting from the QR decomposition of the DomainMatrix. + + Raises + ====== + + DMDomainError + If the domain of the DomainMatrix is not a field (e.g., QQ). 
+ + Examples + ======== + + >>> from sympy import QQ + >>> from sympy.polys.matrices import DomainMatrix + >>> A = DomainMatrix([[1, 2], [3, 4], [5, 6]], (3, 2), QQ) + >>> Q, R = A.qr() + >>> Q + DomainMatrix([[1, 26/35], [3, 8/35], [5, -2/7]], (3, 2), QQ) + >>> R + DomainMatrix([[1, 44/35], [0, 1]], (2, 2), QQ) + >>> Q * R == A + True + >>> (Q.transpose() * Q).is_diagonal + True + >>> R.is_upper + True + + See Also + ======== + + lu + + """ + ddm_q, ddm_r = self.rep.qr() + Q = self.from_rep(ddm_q) + R = self.from_rep(ddm_r) + return Q, R + def lu_solve(self, rhs): r""" Solver for DomainMatrix x in the A*x = B diff --git a/sympy/polys/matrices/sdm.py b/sympy/polys/matrices/sdm.py index 0e0685f5d6fb..02aba89e906b 100644 --- a/sympy/polys/matrices/sdm.py +++ b/sympy/polys/matrices/sdm.py @@ -1065,6 +1065,19 @@ def lu(A): L, U, swaps = A.to_ddm().lu() return A.from_ddm(L), A.from_ddm(U), swaps + def qr(self): + """ + QR decomposition for SDM (Sparse Domain Matrix). + + Returns: + - Q: Orthogonal matrix as a SDM. + - R: Upper triangular matrix as a SDM. + """ + ddm_q, ddm_r = self.to_ddm().qr() + Q = ddm_q.to_sdm() + R = ddm_r.to_sdm() + return Q, R + def lu_solve(A, b): """ diff --git a/sympy/polys/matrices/tests/test_xxm.py b/sympy/polys/matrices/tests/test_xxm.py index 60386a1bea4c..b68247749810 100644 --- a/sympy/polys/matrices/tests/test_xxm.py +++ b/sympy/polys/matrices/tests/test_xxm.py @@ -862,3 +862,138 @@ def test_XXM_lll(DM): assert M.lll() == M_lll assert M.lll_transform() == (M_lll, T) assert T.matmul(M) == M_lll + + [email protected]('DM', DMQ_all) +def test_XXM_qr_mixed_signs(DM): + lol = [[QQ(1), QQ(-2)], [QQ(-3), QQ(4)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_large_matrix(DM): + lol = [[QQ(i + j) for j in range(10)] for i in range(10)] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_identity_matrix(DM): + T = type(DM([[0]])) + A = T.eye(3, QQ) + Q, R = A.qr() + assert Q == A + assert R == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_square_matrix(DM): + lol = [[QQ(3), QQ(1)], [QQ(4), QQ(3)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_matrix_with_zero_columns(DM): + lol = [[QQ(3), QQ(0)], [QQ(4), QQ(0)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_linearly_dependent_columns(DM): + lol = [[QQ(1), QQ(2)], [QQ(2), QQ(4)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMZ_all) +def test_XXM_qr_non_field(DM): + lol = [[ZZ(3), ZZ(1)], [ZZ(4), ZZ(3)]] + A = DM(lol) + with pytest.raises(DMDomainError): + A.qr() + + [email protected]('DM', DMQ_all) +def test_XXM_qr_field(DM): + lol = [[QQ(3), QQ(1)], [QQ(4), QQ(3)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_tall_matrix(DM): + lol = [[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]] + A = DM(lol) + Q, R 
= A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_wide_matrix(DM): + lol = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(5), QQ(6)]] + A = DM(lol) + Q, R = A.qr() + assert Q.matmul(R) == A + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + + [email protected]('DM', DMQ_all) +def test_XXM_qr_empty_matrix_0x0(DM): + T = type(DM([[0]])) + A = T.zeros((0, 0), QQ) + Q, R = A.qr() + assert Q.matmul(R).shape == A.shape + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + assert Q.shape == (0, 0) + assert R.shape == (0, 0) + + [email protected]('DM', DMQ_all) +def test_XXM_qr_empty_matrix_2x0(DM): + T = type(DM([[0]])) + A = T.zeros((2, 0), QQ) + Q, R = A.qr() + assert Q.matmul(R).shape == A.shape + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + assert Q.shape == (2, 0) + assert R.shape == (0, 0) + + [email protected]('DM', DMQ_all) +def test_XXM_qr_empty_matrix_0x2(DM): + T = type(DM([[0]])) + A = T.zeros((0, 2), QQ) + Q, R = A.qr() + assert Q.matmul(R).shape == A.shape + assert (Q.transpose().matmul(Q)).is_diagonal + assert R.is_upper + assert Q.shape == (0, 0) + assert R.shape == (0, 2)
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Performance Optimizations" }
xorbitsai__inference-289@25d3dd4
xorbitsai/inference
Python
289
FEAT: Support Ctransformers
Resolves #220
2023-08-03T06:38:28Z
FEAT: support starcoder

ENH: incorporate ctransformer

### Is your feature request related to a problem? Please describe
We need a backend to support StarCoder, MPT, and falcon in GGML.

### Describe the solution you'd like
A clear and concise description of what you want to happen.

### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

### Additional context
Add any other context or screenshots about the feature request here.
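For context, a minimal sketch of loading a GGML model through ctransformers; the weight-file path is a placeholder and the keyword usage mirrors the `AutoModelForCausalLM.from_pretrained` call that the patch under review wraps.

```python
from ctransformers import AutoModelForCausalLM

# Placeholder path to a local GGML weight file; model_type="starcoder"
# selects the architecture that ctransformers should use for inference.
llm = AutoModelForCausalLM.from_pretrained(
    "/path/to/starcoder.ggmlv3.q4_0.bin",
    model_type="starcoder",
)

# The loaded model is callable with generation keyword arguments.
print(llm("def fibonacci(n):", max_new_tokens=32))
```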
[ { "body": "", "number": 184, "title": "FEAT: support starcoder" }, { "body": "### Is your feature request related to a problem? Please describe\r\nWe need a backend to support StarCoder, MPT, and falcon in GGML.\r\n\r\n### Describe the solution you'd like\r\nA clear and concise description of what you want to happen.\r\n\r\n### Describe alternatives you've considered\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n### Additional context\r\nAdd any other context or screenshots about the feature request here.\r\n", "number": 220, "title": "ENH: incorporate ctransformer" } ]
22146b0ab9a78e0452cfa1b7126e77ed3ef16dcb
{ "head_commit": "25d3dd454257aeb2c4520eb831f72cad47008d27", "head_commit_message": "add GPU check to make sure only supported model can initialize Cuda.", "patch_to_review": "diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml\nindex 810939afbd..28741ea347 100644\n--- a/.github/workflows/python.yaml\n+++ b/.github/workflows/python.yaml\n@@ -92,6 +92,7 @@ jobs:\n pip install sentencepiece\n pip install transformers_stream_generator\n pip install bitsandbytes\n+ pip install ctransformers\n pip install -e \".[dev]\"\n working-directory: .\n \ndiff --git a/setup.cfg b/setup.cfg\nindex 143cef1acb..6b81d1fb19 100644\n--- a/setup.cfg\n+++ b/setup.cfg\n@@ -52,6 +52,7 @@ dev =\n pytest-timeout>=1.2.0\n pytest-forked>=1.0\n pytest-asyncio>=0.14.0\n+ pytest-mock>=3.11.1\n ipython>=6.5.0\n sphinx>=3.0.0,<5.0.0\n pydata-sphinx-theme>=0.3.0\n@@ -60,6 +61,7 @@ dev =\n flake8>=3.8.0\n black\n all =\n+ ctransformers\n llama-cpp-python>=0.1.77\n transformers>=4.31.0\n torch\n@@ -72,6 +74,7 @@ all =\n tiktoken\n ggml =\n llama-cpp-python>=0.1.77\n+ ctransformers\n pytorch =\n transformers>=4.31.0\n torch\ndiff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py\nindex 89d9c1a0b4..14ca84554d 100644\n--- a/xinference/model/llm/__init__.py\n+++ b/xinference/model/llm/__init__.py\n@@ -35,6 +35,7 @@\n \n def _install():\n from .ggml.chatglm import ChatglmCppChatModel\n+ from .ggml.ctransformers import CtransformersModel\n from .ggml.llamacpp import LlamaCppChatModel, LlamaCppModel\n from .pytorch.baichuan import BaichuanPytorchChatModel\n from .pytorch.chatglm import ChatglmPytorchChatModel\n@@ -54,6 +55,7 @@ def _install():\n FalconPytorchModel,\n FalconPytorchChatModel,\n ChatglmPytorchChatModel,\n+ CtransformersModel,\n ]\n )\n \ndiff --git a/xinference/model/llm/ggml/ctransformers.py b/xinference/model/llm/ggml/ctransformers.py\nnew file mode 100644\nindex 0000000000..3a5984dac2\n--- /dev/null\n+++ b/xinference/model/llm/ggml/ctransformers.py\n@@ -0,0 +1,277 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import logging\n+import os\n+from typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union\n+\n+if TYPE_CHECKING:\n+ from ctransformers import AutoConfig\n+\n+from ....types import Completion, CompletionChunk\n+from ..core import LLM\n+from ..llm_family import LLMFamilyV1, LLMSpecV1\n+from .ctransformers_util import generate_stream\n+\n+logger = logging.getLogger(__name__)\n+\n+# all supported models for Ctransformers with their model type.\n+# Please Strictly follows this name format when inputting new model to model_family.\n+MODEL_TYPE_FOR_CTRANSFORMERS = {\n+ \"gpt-2\": \"gpt2\",\n+ \"gpt-j\": \"gptj\",\n+ \"gpt4all-j\": \"gptj\",\n+ \"gpt-neox\": \"gpt_neox\",\n+ \"stablelm\": \"gpt_neox\",\n+ \"llama\": \"llama\",\n+ \"llama-2\": \"llama\",\n+ \"mpt\": \"mpt\",\n+ \"dolly-v2\": \"dolly-v2\",\n+ \"replit\": \"replit\",\n+ \"starcoder\": \"starcoder\",\n+ \"starchat\": 
\"starcoder\",\n+ \"falcon\": \"falcon\",\n+}\n+\n+# these two constants subjects to change for future development and ctransformers updates.\n+CTRANSFORMERS_SUPPORTED_MODEL = [\"starcoder\", \"gpt-2\"]\n+\n+CTRANSFORMERS_GPU_SUPPORT = [\"llama\", \"llama-2\", \"mpt\", \"falcon\"]\n+\n+SIZE_TO_GPU_LAYERS = {\n+ 3: 26,\n+ 7: 32,\n+ 13: 40,\n+ 30: 60,\n+ 65: 80,\n+}\n+\n+\n+class CtransformersModelConfig(TypedDict, total=False):\n+ n_ctx: int\n+ n_gpu_layers: int\n+\n+\n+class CtransformersGenerateConfig(TypedDict, total=False):\n+ max_tokens: Optional[int]\n+ top_k: Optional[int]\n+ top_p: Optional[float]\n+ temperature: Optional[float]\n+ repetition_penalty: Optional[float]\n+ last_n_tokens: Optional[int]\n+ seed: Optional[int]\n+ batch_size: Optional[int]\n+ threads: Optional[int]\n+ stop: Optional[Sequence[str]]\n+ stream: Optional[bool]\n+ reset: Optional[bool]\n+\n+\n+def _has_cuda_device():\n+ from xorbits._mars.resource import cuda_count\n+\n+ return cuda_count() > 0\n+\n+\n+class CtransformersModel(LLM):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ ctransformers_Model_Config: Optional[CtransformersModelConfig],\n+ ):\n+ super().__init__(model_uid, model_family, model_spec, quantization, model_path)\n+\n+ self._model_type = None\n+ closest_size = min(\n+ SIZE_TO_GPU_LAYERS.keys(),\n+ key=lambda x: abs(x - model_spec.model_size_in_billions),\n+ )\n+\n+ self._model_family = model_family\n+ self._model_uid = model_uid\n+ self._llm = None\n+\n+ self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size]\n+ self._ctransformer_model_config = self._sanitize_model_config(\n+ model_path, ctransformers_Model_Config\n+ )\n+\n+ def _sanitize_model_config(\n+ self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig]\n+ ) -> \"AutoConfig\":\n+ try:\n+ from ctransformers import AutoConfig, Config\n+ except ImportError:\n+ error_message = (\n+ \"Failed to import module 'ctransformers - AutoConfig and Config'\"\n+ )\n+\n+ installation_guide = [\n+ f\"Please make sure 'ctransformers' is installed.\",\n+ f\"You can install it by checking out the repository for command:\"\n+ f\"https://github.com/marella/ctransformers\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ # if the model have customized config, we update it.\n+ ctransformers_model_config_returned = Config()\n+ potential_gpu_layers = None\n+ if ctransformers_model_config:\n+ potential_context_length = ctransformers_model_config.pop(\"n_ctx\", None)\n+ potential_gpu_layers = ctransformers_model_config.pop(\"n_gpu_layers\", None)\n+\n+ ctransformers_model_config_returned.context_length = (\n+ potential_context_length\n+ )\n+ ctransformers_model_config_returned.gpu_layers = potential_gpu_layers\n+\n+ # if user does not define gpu layers, we have to set it with our system if applicable.\n+ if potential_gpu_layers is None:\n+ if self._model_family.model_name not in CTRANSFORMERS_GPU_SUPPORT:\n+ ctransformers_model_config_returned.gpu_layers = -1\n+ elif self._is_darwin_and_apple_silicon():\n+ ctransformers_model_config_returned.gpu_layers = 1\n+ elif _has_cuda_device():\n+ ctransformers_model_config_returned.gpu_layers = self._gpu_layers\n+\n+ return AutoConfig(ctransformers_model_config_returned)\n+\n+ def _sanitize_generate_config(\n+ self,\n+ ctransformers_generate_config: Optional[CtransformersGenerateConfig],\n+ ) -> CtransformersGenerateConfig:\n+ # if the input config is not 
None, we try to copy the selected attributes to the ctransformersGenerateConfig.\n+ if ctransformers_generate_config is None:\n+ ctransformers_generate_config = CtransformersGenerateConfig()\n+\n+ # for our system, the threads will have to be set to 4\n+ # all other parameters, if not specified, will be set to default when generate.\n+ ctransformers_generate_config.setdefault(\"threads\", 4)\n+\n+ return ctransformers_generate_config\n+\n+ def load(self):\n+ try:\n+ from ctransformers import AutoModelForCausalLM\n+ except ImportError:\n+ error_message = \"Failed to import module 'ctransformers'\"\n+\n+ installation_guide = [\n+ f\"Please make sure 'ctransformers' is installed.\",\n+ f\"You can install it by checking out the repository for command.\"\n+ f\"https://github.com/marella/ctransformers\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ model_path = os.path.join(\n+ self.model_path,\n+ self.model_spec.model_file_name_template.format(\n+ quantization=self.quantization\n+ ),\n+ )\n+\n+ self._model_type = self._determine_model_type()\n+ self._llm = AutoModelForCausalLM.from_pretrained(\n+ model_path_or_repo_id=model_path,\n+ model_type=self._model_type,\n+ config=self._ctransformer_model_config,\n+ )\n+\n+ @classmethod\n+ def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool:\n+ if llm_spec.model_format != \"ggmlv3\":\n+ return False\n+ if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:\n+ return False\n+ if \"generate\" not in llm_family.model_ability:\n+ return False\n+ return True\n+\n+ def _determine_model_type(self):\n+ if self._model_family.model_name not in MODEL_TYPE_FOR_CTRANSFORMERS:\n+ raise ValueError(\n+ f\"The current model {self._model_family.model_name} is not supported, check your model name. 
\"\n+ )\n+ return MODEL_TYPE_FOR_CTRANSFORMERS[self._model_family.model_name]\n+\n+ def generate(\n+ self, prompt: str, generate_config_raw: CtransformersGenerateConfig\n+ ) -> Union[Completion, Iterator[CompletionChunk]]:\n+ def generator_wrapper(\n+ _prompt: str,\n+ _max_new_tokens: Union[int, None],\n+ _generate_config: CtransformersGenerateConfig,\n+ ) -> Iterator[CompletionChunk]:\n+ assert self._model_uid is not None\n+ for _completion_chunk, _ in generate_stream(\n+ model=self._model_uid,\n+ model_ref=self._llm,\n+ prompt=_prompt,\n+ max_new_tokens=_max_new_tokens,\n+ **_generate_config,\n+ ):\n+ yield _completion_chunk\n+\n+ generate_config = self._sanitize_generate_config(generate_config_raw)\n+ max_new_tokens = generate_config.pop(\"max_tokens\", None)\n+\n+ logger.debug(\n+ \"Enter generate, prompt: %s, generate config: %s\", prompt, generate_config\n+ )\n+\n+ stream_or_not = generate_config.get(\"stream\", False)\n+ if stream_or_not:\n+ return generator_wrapper(\n+ _prompt=prompt,\n+ _max_new_tokens=max_new_tokens,\n+ _generate_config=generate_config,\n+ )\n+ else:\n+ assert self.model_uid is not None\n+ completion_chunk = None\n+ completion_usage = None\n+ for completion_chunk, completion_usage in generate_stream(\n+ model=self.model_uid,\n+ model_ref=self._llm,\n+ prompt=prompt,\n+ max_new_tokens=max_new_tokens,\n+ **generate_config,\n+ ):\n+ pass\n+\n+ assert completion_chunk is not None\n+ assert completion_usage is not None\n+\n+ completion = Completion(\n+ id=completion_chunk[\"id\"],\n+ object=completion_chunk[\"object\"],\n+ created=completion_chunk[\"created\"],\n+ model=completion_chunk[\"model\"],\n+ choices=completion_chunk[\"choices\"],\n+ usage=completion_usage,\n+ )\n+\n+ logger.debug(\n+ \"Generated, completion: %s, generate config: %s\",\n+ completion,\n+ generate_config,\n+ )\n+\n+ return completion\ndiff --git a/xinference/model/llm/ggml/ctransformers_util.py b/xinference/model/llm/ggml/ctransformers_util.py\nnew file mode 100644\nindex 0000000000..33a14705be\n--- /dev/null\n+++ b/xinference/model/llm/ggml/ctransformers_util.py\n@@ -0,0 +1,165 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+import logging\n+import re\n+import time\n+import uuid\n+from typing import Iterator, Optional, Sequence, Tuple\n+\n+from ....types import CompletionChoice, CompletionChunk, CompletionUsage\n+\n+logger = logging.getLogger(__name__)\n+\n+\n+def generate_stream(\n+ model,\n+ model_ref,\n+ prompt: str,\n+ *,\n+ max_new_tokens: Optional[int] = None,\n+ top_k: Optional[int] = None,\n+ top_p: Optional[float] = None,\n+ temperature: Optional[float] = None,\n+ repetition_penalty: Optional[float] = None,\n+ last_n_tokens: Optional[int] = None,\n+ seed: Optional[int] = None,\n+ batch_size: Optional[int] = None,\n+ stream: Optional[bool] = False,\n+ threads: Optional[int] = None,\n+ stop: Optional[Sequence[str]] = None,\n+ reset: Optional[bool] = None,\n+ **kwargs,\n+) -> Iterator[Tuple[CompletionChunk, CompletionUsage]]:\n+ stop = 
stop or []\n+ if isinstance(stop, str):\n+ stop = [stop]\n+\n+ tokens = model_ref.tokenize(prompt)\n+\n+ stop_regex = re.compile(\"|\".join(map(re.escape, stop)))\n+ count = 0\n+ text = \"\"\n+ total_text = \"\"\n+ incomplete = b\"\"\n+\n+ # parameters needed for Xinference.\n+ finish_reason = None\n+\n+ for token in model_ref.generate(\n+ tokens,\n+ top_k=top_k,\n+ top_p=top_p,\n+ temperature=temperature,\n+ repetition_penalty=repetition_penalty,\n+ last_n_tokens=last_n_tokens,\n+ seed=seed,\n+ batch_size=batch_size,\n+ threads=threads,\n+ reset=reset,\n+ ):\n+ # Handle incomplete UTF-8 multi-byte characters.\n+ try:\n+ from ctransformers.utils import utf8_split_incomplete\n+ except ImportError:\n+ error_message = (\n+ \"Failed to import module 'ctransformers - utf8_split_incomplete'\"\n+ )\n+\n+ installation_guide = [\n+ \"Please make sure 'ctransformers' is installed. You can install it by checking out the repository: \"\n+ \"https://github.com/marella/ctransformers\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ incomplete += model_ref.detokenize([token], decode=False)\n+ complete, incomplete = utf8_split_incomplete(incomplete)\n+ output = complete.decode(errors=\"ignore\")\n+ text += output\n+ total_text += output\n+\n+ logger.debug(\"Output, completion: %s\", text)\n+\n+ # https://github.com/abetlen/llama-cpp-python/blob/1a13d76c487df1c8560132d10bda62d6e2f4fa93/llama_cpp/llama.py#L686-L706\n+ # Check if one of the stop sequences is part of the text.\n+ # Note that the stop sequence may not always be at the end of text.\n+ if stop:\n+ match = stop_regex.search(text)\n+ if match:\n+ text = text[: match.start()]\n+ finish_reason = \"stop\"\n+ break\n+\n+ # Avoid sending the longest suffix of text which is also a prefix\n+ # of a stop sequence, as it can form a stop sequence with the text\n+ # generated later.\n+ longest = 0\n+ for s in stop:\n+ for i in range(len(s), 0, -1):\n+ if text.endswith(s[:i]):\n+ longest = max(i, longest)\n+ break\n+\n+ end = len(text) - longest\n+ if end > 0:\n+ output = text[:end]\n+ completion_choice = CompletionChoice(\n+ text=output, index=0, logprobs=None, finish_reason=None\n+ )\n+ completion_chunk = CompletionChunk(\n+ id=str(uuid.uuid1()),\n+ object=\"text_completion\",\n+ created=int(time.time()),\n+ model=model,\n+ choices=[completion_choice],\n+ )\n+ completion_usage = CompletionUsage(\n+ prompt_tokens=len(tokens),\n+ completion_tokens=count + 1,\n+ total_tokens=count + 1 + len(tokens),\n+ )\n+\n+ yield completion_chunk, completion_usage\n+ text = text[end:]\n+\n+ count += 1\n+ if max_new_tokens is not None and count >= max_new_tokens:\n+ finish_reason = \"length\"\n+ break\n+\n+ if stream is False:\n+ completion_choice = CompletionChoice(\n+ text=total_text, index=0, logprobs=None, finish_reason=finish_reason\n+ )\n+ else:\n+ completion_choice = CompletionChoice(\n+ text=text, index=0, logprobs=None, finish_reason=finish_reason\n+ )\n+\n+ completion_chunk = CompletionChunk(\n+ id=str(uuid.uuid1()),\n+ object=\"text_completion\",\n+ created=int(time.time()),\n+ model=model,\n+ choices=[completion_choice],\n+ )\n+ completion_usage = CompletionUsage(\n+ prompt_tokens=len(tokens),\n+ completion_tokens=count,\n+ total_tokens=count + len(tokens),\n+ )\n+\n+ logger.debug(\"Completionchoice: %s\", completion_choice)\n+\n+ yield completion_chunk, completion_usage\ndiff --git a/xinference/model/llm/ggml/llamacpp.py b/xinference/model/llm/ggml/llamacpp.py\nindex bea3e4744e..4bcee28059 100644\n--- 
a/xinference/model/llm/ggml/llamacpp.py\n+++ b/xinference/model/llm/ggml/llamacpp.py\n@@ -28,6 +28,7 @@\n from ..core import LLM\n from ..llm_family import LLMFamilyV1, LLMSpecV1\n from ..utils import ChatModelMixin\n+from .ctransformers import CTRANSFORMERS_SUPPORTED_MODEL\n \n if TYPE_CHECKING:\n from llama_cpp import LogitsProcessorList, StoppingCriteriaList\n@@ -187,7 +188,10 @@ def load(self):\n def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool:\n if llm_spec.model_format != \"ggmlv3\":\n return False\n- if \"chatglm\" in llm_family.model_name:\n+ if (\n+ \"chatglm\" in llm_family.model_name\n+ or llm_family.model_name in CTRANSFORMERS_SUPPORTED_MODEL\n+ ):\n return False\n if \"generate\" not in llm_family.model_ability:\n return False\n@@ -258,7 +262,10 @@ def __init__(\n def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool:\n if llm_spec.model_format != \"ggmlv3\":\n return False\n- if \"chatglm\" in llm_family.model_name:\n+ if (\n+ \"chatglm\" in llm_family.model_name\n+ or llm_family.model_name in CTRANSFORMERS_SUPPORTED_MODEL\n+ ):\n return False\n if \"chat\" not in llm_family.model_ability:\n return False\ndiff --git a/xinference/model/llm/ggml/tests/test_ctransformers.py b/xinference/model/llm/ggml/tests/test_ctransformers.py\nnew file mode 100644\nindex 0000000000..d13b2377fb\n--- /dev/null\n+++ b/xinference/model/llm/ggml/tests/test_ctransformers.py\n@@ -0,0 +1,166 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+import random\n+import string\n+\n+import pytest\n+\n+from .....client import Client, GenerateModelHandle\n+from ....llm import GgmlLLMSpecV1, LLMFamilyV1\n+from ..ctransformers import CtransformersModel\n+\n+\n+class MockCtransformersModel(CtransformersModel):\n+ def load(self):\n+ pass\n+\n+\n+mock_model_spec = GgmlLLMSpecV1(\n+ model_format=\"ggmlv3\",\n+ model_size_in_billions=6,\n+ quantizations=[\"q2_k\", \"q4_0\"],\n+ model_id=\"test_id\",\n+ model_file_name_template=\"TestModel.{quantization}.ggmlv3.bin\",\n+)\n+\n+test_model_spec = \"\"\"{\n+ \"version\":1,\n+ \"model_name\":\"TestModel\",\n+ \"model_lang\":[\n+ \"en\"\n+ ],\n+ \"model_ability\":[\n+ \"embed\", \"generate\"\n+ ],\n+ \"model_specs\":[\n+ {\n+ \"model_format\":\"ggmlv3\",\n+ \"model_size_in_billions\":6,\n+ \"quantizations\": [\"q2_k\", \"q4_0\"],\n+ \"model_id\":\"test_id\",\n+ \"model_file_name_template\":\"TestModel.{quantization}.ggmlv3.bin\"\n+ },\n+ {\n+ \"model_format\":\"pytorch\",\n+ \"model_size_in_billions\":3,\n+ \"quantizations\": [\"int8\", \"int4\", \"none\"],\n+ \"model_id\":\"example/TestModel\"\n+ }\n+ ],\n+ \"prompt_style\": null\n+}\"\"\"\n+\n+mock_model_family = LLMFamilyV1.parse_raw(test_model_spec)\n+\n+\[email protected](\n+ \"model_spec, model_family\", [(mock_model_spec, mock_model_family)]\n+)\n+def test_ctransformer_init(model_spec, model_family):\n+ from ctransformers import AutoConfig\n+\n+ quantization = \"q4_0\"\n+ uid = \"\".join(random.choice(string.digits) for i in 
range(15))\n+ path = \"\".join(\n+ random.choice(string.ascii_letters + string.punctuation) for i in range(100)\n+ )\n+ model = MockCtransformersModel(\n+ model_uid=uid,\n+ model_family=model_family,\n+ model_spec=model_spec,\n+ quantization=quantization,\n+ model_path=path,\n+ ctransformers_Model_Config=None,\n+ )\n+\n+ assert model.model_uid == uid\n+ assert model.quantization == quantization\n+ assert model.model_path == path\n+ assert model._ctransformer_model_config is not None\n+ assert isinstance(model._ctransformer_model_config, AutoConfig)\n+\n+ assert isinstance(model.model_spec, GgmlLLMSpecV1)\n+ assert isinstance(model.model_family, LLMFamilyV1)\n+ assert isinstance(model.model_family.model_specs[0], GgmlLLMSpecV1)\n+\n+ assert (\n+ model.model_family.model_specs[0].model_format == model.model_spec.model_format\n+ )\n+ assert model.model_family.model_specs[0].model_format == model_spec.model_format\n+ assert (\n+ model.model_family.model_specs[0].model_size_in_billions\n+ == model.model_spec.model_size_in_billions\n+ )\n+ assert (\n+ model.model_family.model_specs[0].model_size_in_billions\n+ == model_spec.model_size_in_billions\n+ )\n+ assert (\n+ model.model_family.model_specs[0].quantizations\n+ == model.model_spec.quantizations\n+ )\n+ assert model.model_family.model_specs[0].quantizations == model_spec.quantizations\n+ assert model.model_family.model_specs[0].model_id == model.model_spec.model_id\n+ assert model.model_family.model_specs[0].model_id == model_spec.model_id\n+ assert (\n+ model.model_family.model_specs[0].model_file_name_template\n+ == model.model_spec.model_file_name_template\n+ )\n+ assert (\n+ model.model_family.model_specs[0].model_file_name_template\n+ == model_spec.model_file_name_template\n+ )\n+ assert model._llm is None\n+\n+\[email protected]\n+async def test_ctransformers_generate(setup):\n+ endpoint, _ = setup\n+ client = Client(endpoint)\n+ assert len(client.list_models()) == 0\n+\n+ model_uid = client.launch_model(\n+ model_name=\"gpt-2\",\n+ model_size_in_billions=1,\n+ model_format=\"ggmlv3\",\n+ quantization=\"none\",\n+ )\n+\n+ assert len(client.list_models()) == 1\n+\n+ model = client.get_model(model_uid=model_uid)\n+ assert isinstance(model, GenerateModelHandle)\n+\n+ completion = model.generate(\"AI is going to\", generate_config={\"max_tokens\": 5})\n+ print(completion)\n+ assert \"id\" in completion\n+ assert \"text\" in completion[\"choices\"][0]\n+ assert len(completion[\"choices\"][0][\"text\"]) > 0\n+\n+ assert completion[\"model\"] == model_uid\n+\n+ assert \"finish_reason\" in completion[\"choices\"][0]\n+ assert completion[\"choices\"][0][\"finish_reason\"] == \"length\"\n+\n+ assert \"prompt_tokens\" in completion[\"usage\"]\n+ assert completion[\"usage\"][\"prompt_tokens\"] == 4\n+\n+ assert \"completion_tokens\" in completion[\"usage\"]\n+ assert completion[\"usage\"][\"completion_tokens\"] == 5\n+\n+ assert \"total_tokens\" in completion[\"usage\"]\n+ assert completion[\"usage\"][\"total_tokens\"] == 9\n+\n+ client.terminate_model(model_uid=model_uid)\n+ assert len(client.list_models()) == 0\ndiff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json\nindex ae481f2245..8e709182f2 100644\n--- a/xinference/model/llm/llm_family.json\n+++ b/xinference/model/llm/llm_family.json\n@@ -744,7 +744,8 @@\n \"version\": 1,\n \"model_name\": \"qwen-chat\",\n \"model_lang\": [\n- \"en\", \"zh\"\n+ \"en\",\n+ \"zh\"\n ],\n \"model_ability\": [\n \"embed\",\n@@ -774,5 +775,53 @@\n 151643\n ]\n }\n+ },\n+ {\n+ 
\"version\": 1,\n+ \"model_name\": \"starcoder\",\n+ \"model_lang\": [\n+ \"en\"\n+ ],\n+ \"model_ability\":[\n+ \"generate\"\n+ ],\n+ \"model_specs\": [\n+ {\n+ \"model_format\": \"ggmlv3\",\n+ \"model_size_in_billions\": 16,\n+ \"quantizations\": [\n+ \"q4_0\",\n+ \"q4_1\",\n+ \"q5_0\",\n+ \"q5_1\",\n+ \"q8_0\"\n+ ],\n+ \"model_id\": \"TheBloke/starcoder-GGML\",\n+ \"model_file_name_template\": \"starcoder.ggmlv3.{quantization}.bin\"\n+ }\n+ ],\n+ \"prompt_style\": null\n+ },\n+ {\n+ \"version\": 1,\n+ \"model_name\": \"gpt-2\",\n+ \"model_lang\": [\n+ \"en\"\n+ ],\n+ \"model_ability\":[\n+ \"generate\"\n+ ],\n+ \"model_specs\": [\n+ {\n+ \"model_format\": \"ggmlv3\",\n+ \"model_size_in_billions\": 1,\n+ \"quantizations\": [\n+ \"none\"\n+ ],\n+ \"model_id\": \"marella/gpt-2-ggml\",\n+ \"model_file_name_template\": \"ggml-model.bin\"\n+ }\n+ ],\n+ \"prompt_style\": null\n }\n ]\n" }
[ { "diff_hunk": "@@ -0,0 +1,277 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import logging\n+import os\n+from typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union\n+\n+if TYPE_CHECKING:\n+ from ctransformers import AutoConfig\n+\n+from ....types import Completion, CompletionChunk\n+from ..core import LLM\n+from ..llm_family import LLMFamilyV1, LLMSpecV1\n+from .ctransformers_util import generate_stream\n+\n+logger = logging.getLogger(__name__)\n+\n+# all supported models for Ctransformers with their model type.\n+# Please Strictly follows this name format when inputting new model to model_family.\n+MODEL_TYPE_FOR_CTRANSFORMERS = {\n+ \"gpt-2\": \"gpt2\",\n+ \"gpt-j\": \"gptj\",\n+ \"gpt4all-j\": \"gptj\",\n+ \"gpt-neox\": \"gpt_neox\",\n+ \"stablelm\": \"gpt_neox\",\n+ \"llama\": \"llama\",\n+ \"llama-2\": \"llama\",\n+ \"mpt\": \"mpt\",\n+ \"dolly-v2\": \"dolly-v2\",\n+ \"replit\": \"replit\",\n+ \"starcoder\": \"starcoder\",\n+ \"starchat\": \"starcoder\",\n+ \"falcon\": \"falcon\",\n+}\n+\n+# these two constants subjects to change for future development and ctransformers updates.\n+CTRANSFORMERS_SUPPORTED_MODEL = [\"starcoder\", \"gpt-2\"]\n+\n+CTRANSFORMERS_GPU_SUPPORT = [\"llama\", \"llama-2\", \"mpt\", \"falcon\"]\n+\n+SIZE_TO_GPU_LAYERS = {\n+ 3: 26,\n+ 7: 32,\n+ 13: 40,\n+ 30: 60,\n+ 65: 80,\n+}\n+\n+\n+class CtransformersModelConfig(TypedDict, total=False):\n+ n_ctx: int\n+ n_gpu_layers: int\n+\n+\n+class CtransformersGenerateConfig(TypedDict, total=False):\n+ max_tokens: Optional[int]\n+ top_k: Optional[int]\n+ top_p: Optional[float]\n+ temperature: Optional[float]\n+ repetition_penalty: Optional[float]\n+ last_n_tokens: Optional[int]\n+ seed: Optional[int]\n+ batch_size: Optional[int]\n+ threads: Optional[int]\n+ stop: Optional[Sequence[str]]\n+ stream: Optional[bool]\n+ reset: Optional[bool]\n+\n+\n+def _has_cuda_device():\n+ from xorbits._mars.resource import cuda_count\n+\n+ return cuda_count() > 0\n+\n+\n+class CtransformersModel(LLM):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ ctransformers_Model_Config: Optional[CtransformersModelConfig],\n+ ):\n+ super().__init__(model_uid, model_family, model_spec, quantization, model_path)\n+\n+ self._model_type = None\n+ closest_size = min(\n+ SIZE_TO_GPU_LAYERS.keys(),\n+ key=lambda x: abs(x - model_spec.model_size_in_billions),\n+ )\n+\n+ self._model_family = model_family\n+ self._model_uid = model_uid\n+ self._llm = None\n+\n+ self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size]\n+ self._ctransformer_model_config = self._sanitize_model_config(\n+ model_path, ctransformers_Model_Config\n+ )\n+\n+ def _sanitize_model_config(\n+ self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig]\n+ ) -> \"AutoConfig\":\n+ try:\n+ from ctransformers import AutoConfig, Config\n+ except ImportError:\n+ error_message = (\n+ \"Failed to 
import module 'ctransformers - AutoConfig and Config'\"\n+ )\n+\n+ installation_guide = [\n+ f\"Please make sure 'ctransformers' is installed.\",\n+ f\"You can install it by checking out the repository for command:\"\n+ f\"https://github.com/marella/ctransformers\",\n+ ]\n+\n+ raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n+\n+ # if the model have customized config, we update it.\n+ ctransformers_model_config_returned = Config()\n+ potential_gpu_layers = None\n+ if ctransformers_model_config:\n+ potential_context_length = ctransformers_model_config.pop(\"n_ctx\", None)\n+ potential_gpu_layers = ctransformers_model_config.pop(\"n_gpu_layers\", None)\n+\n+ ctransformers_model_config_returned.context_length = (", "line": null, "original_line": 139, "original_start_line": null, "path": "xinference/model/llm/ggml/ctransformers.py", "start_line": null, "text": "@user1:\nThe redundant parentheses could be misleading.\n\n@author:\nchanged in the latest version" }, { "diff_hunk": "@@ -0,0 +1,277 @@\n+# Copyright 2022-2023 XProbe Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import logging\n+import os\n+from typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union\n+\n+if TYPE_CHECKING:\n+ from ctransformers import AutoConfig\n+\n+from ....types import Completion, CompletionChunk\n+from ..core import LLM\n+from ..llm_family import LLMFamilyV1, LLMSpecV1\n+from .ctransformers_util import generate_stream\n+\n+logger = logging.getLogger(__name__)\n+\n+# all supported models for Ctransformers with their model type.\n+# Please Strictly follows this name format when inputting new model to model_family.\n+MODEL_TYPE_FOR_CTRANSFORMERS = {\n+ \"gpt-2\": \"gpt2\",\n+ \"gpt-j\": \"gptj\",\n+ \"gpt4all-j\": \"gptj\",\n+ \"gpt-neox\": \"gpt_neox\",\n+ \"stablelm\": \"gpt_neox\",\n+ \"llama\": \"llama\",\n+ \"llama-2\": \"llama\",\n+ \"mpt\": \"mpt\",\n+ \"dolly-v2\": \"dolly-v2\",\n+ \"replit\": \"replit\",\n+ \"starcoder\": \"starcoder\",\n+ \"starchat\": \"starcoder\",\n+ \"falcon\": \"falcon\",\n+}\n+\n+# these two constants subjects to change for future development and ctransformers updates.\n+CTRANSFORMERS_SUPPORTED_MODEL = [\"starcoder\", \"gpt-2\"]\n+\n+CTRANSFORMERS_GPU_SUPPORT = [\"llama\", \"llama-2\", \"mpt\", \"falcon\"]\n+\n+SIZE_TO_GPU_LAYERS = {\n+ 3: 26,\n+ 7: 32,\n+ 13: 40,\n+ 30: 60,\n+ 65: 80,\n+}\n+\n+\n+class CtransformersModelConfig(TypedDict, total=False):\n+ n_ctx: int\n+ n_gpu_layers: int\n+\n+\n+class CtransformersGenerateConfig(TypedDict, total=False):\n+ max_tokens: Optional[int]\n+ top_k: Optional[int]\n+ top_p: Optional[float]\n+ temperature: Optional[float]\n+ repetition_penalty: Optional[float]\n+ last_n_tokens: Optional[int]\n+ seed: Optional[int]\n+ batch_size: Optional[int]\n+ threads: Optional[int]\n+ stop: Optional[Sequence[str]]\n+ stream: Optional[bool]\n+ reset: Optional[bool]\n+\n+\n+def _has_cuda_device():\n+ from xorbits._mars.resource import cuda_count\n+\n+ return cuda_count() > 0\n+\n+\n+class 
CtransformersModel(LLM):\n+ def __init__(\n+ self,\n+ model_uid: str,\n+ model_family: \"LLMFamilyV1\",\n+ model_spec: \"LLMSpecV1\",\n+ quantization: str,\n+ model_path: str,\n+ ctransformers_Model_Config: Optional[CtransformersModelConfig],", "line": null, "original_line": 95, "original_start_line": null, "path": "xinference/model/llm/ggml/ctransformers.py", "start_line": null, "text": "@user1:\n`ctransformers_Model_Config` -> `ctransformers_model_config`\n\n@author:\nchanged in the latest version" } ]
bfecc8a24670a57ff67bc160a80dc380cdd52f85
diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml index 810939afbd..28741ea347 100644 --- a/.github/workflows/python.yaml +++ b/.github/workflows/python.yaml @@ -92,6 +92,7 @@ jobs: pip install sentencepiece pip install transformers_stream_generator pip install bitsandbytes + pip install ctransformers pip install -e ".[dev]" working-directory: . diff --git a/setup.cfg b/setup.cfg index 143cef1acb..6b81d1fb19 100644 --- a/setup.cfg +++ b/setup.cfg @@ -52,6 +52,7 @@ dev = pytest-timeout>=1.2.0 pytest-forked>=1.0 pytest-asyncio>=0.14.0 + pytest-mock>=3.11.1 ipython>=6.5.0 sphinx>=3.0.0,<5.0.0 pydata-sphinx-theme>=0.3.0 @@ -60,6 +61,7 @@ dev = flake8>=3.8.0 black all = + ctransformers llama-cpp-python>=0.1.77 transformers>=4.31.0 torch @@ -72,6 +74,7 @@ all = tiktoken ggml = llama-cpp-python>=0.1.77 + ctransformers pytorch = transformers>=4.31.0 torch diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py index 89d9c1a0b4..14ca84554d 100644 --- a/xinference/model/llm/__init__.py +++ b/xinference/model/llm/__init__.py @@ -35,6 +35,7 @@ def _install(): from .ggml.chatglm import ChatglmCppChatModel + from .ggml.ctransformers import CtransformersModel from .ggml.llamacpp import LlamaCppChatModel, LlamaCppModel from .pytorch.baichuan import BaichuanPytorchChatModel from .pytorch.chatglm import ChatglmPytorchChatModel @@ -54,6 +55,7 @@ def _install(): FalconPytorchModel, FalconPytorchChatModel, ChatglmPytorchChatModel, + CtransformersModel, ] ) diff --git a/xinference/model/llm/ggml/ctransformers.py b/xinference/model/llm/ggml/ctransformers.py new file mode 100644 index 0000000000..e04cc2ef14 --- /dev/null +++ b/xinference/model/llm/ggml/ctransformers.py @@ -0,0 +1,276 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +import os +from typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union + +if TYPE_CHECKING: + from ctransformers import AutoConfig + +from ....types import Completion, CompletionChunk +from ..core import LLM +from ..llm_family import LLMFamilyV1, LLMSpecV1 +from .ctransformers_util import generate_stream + +logger = logging.getLogger(__name__) + +# all supported models for Ctransformers with their model type. +# Please Strictly follows this name format when inputting new model to model_family. +MODEL_TYPE_FOR_CTRANSFORMERS = { + "gpt-2": "gpt2", + "gpt-j": "gptj", + "gpt4all-j": "gptj", + "gpt-neox": "gpt_neox", + "stablelm": "gpt_neox", + "llama": "llama", + "llama-2": "llama", + "mpt": "mpt", + "dolly-v2": "dolly-v2", + "replit": "replit", + "starcoder": "starcoder", + "starchat": "starcoder", + "falcon": "falcon", +} + +# these two constants subjects to change for future development and ctransformers updates. 
+CTRANSFORMERS_SUPPORTED_MODEL = ["starcoder", "gpt-2"] + +CTRANSFORMERS_GPU_SUPPORT = ["llama", "llama-2", "mpt", "falcon"] + +SIZE_TO_GPU_LAYERS = { + 3: 26, + 7: 32, + 13: 40, + 30: 60, + 65: 80, +} + + +class CtransformersModelConfig(TypedDict, total=False): + n_ctx: int + n_gpu_layers: int + + +class CtransformersGenerateConfig(TypedDict, total=False): + max_tokens: Optional[int] + top_k: Optional[int] + top_p: Optional[float] + temperature: Optional[float] + repetition_penalty: Optional[float] + last_n_tokens: Optional[int] + seed: Optional[int] + batch_size: Optional[int] + threads: Optional[int] + stop: Optional[Sequence[str]] + stream: Optional[bool] + reset: Optional[bool] + + +def _has_cuda_device(): + from xorbits._mars.resource import cuda_count + + return cuda_count() > 0 + + +class CtransformersModel(LLM): + def __init__( + self, + model_uid: str, + model_family: "LLMFamilyV1", + model_spec: "LLMSpecV1", + quantization: str, + model_path: str, + ctransformers_model_config: Optional[CtransformersModelConfig], + ): + super().__init__(model_uid, model_family, model_spec, quantization, model_path) + + self._model_type = None + closest_size = min( + SIZE_TO_GPU_LAYERS.keys(), + key=lambda x: abs(x - model_spec.model_size_in_billions), + ) + + self._model_family = model_family + self._model_uid = model_uid + self._llm = None + + self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size] + self._ctransformer_model_config = self._sanitize_model_config( + model_path, ctransformers_model_config + ) + + def _sanitize_model_config( + self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig] + ) -> "AutoConfig": + try: + from ctransformers import AutoConfig, Config + except ImportError: + error_message = ( + "Failed to import module 'ctransformers - AutoConfig and Config'" + ) + + installation_guide = [ + f"Please make sure 'ctransformers' is installed.", + f"You can install it by checking out the repository for command:" + f"https://github.com/marella/ctransformers", + ] + + raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}") + + # if the model have customized config, we update it. + model_config_ret = Config() + potential_gpu_layers = None + if ctransformers_model_config: + potential_context_length = ctransformers_model_config.pop("n_ctx", None) + potential_gpu_layers = ctransformers_model_config.pop("n_gpu_layers", None) + + model_config_ret.context_length = potential_context_length + model_config_ret.gpu_layers = potential_gpu_layers + + # if user does not define gpu layers, we have to set it with our system if applicable. + if potential_gpu_layers is None: + if self._model_family.model_name not in CTRANSFORMERS_GPU_SUPPORT: + model_config_ret.gpu_layers = -1 + elif self._is_darwin_and_apple_silicon(): + model_config_ret.gpu_layers = 1 + elif _has_cuda_device(): + model_config_ret.gpu_layers = self._gpu_layers + + return AutoConfig(model_config_ret) + + def _sanitize_generate_config( + self, + ctransformers_generate_config: Optional[CtransformersGenerateConfig], + ) -> CtransformersGenerateConfig: + # if the input config is not None, we try to copy the selected attributes to the ctransformersGenerateConfig. + if ctransformers_generate_config is None: + ctransformers_generate_config = CtransformersGenerateConfig() + + # for our system, the threads will have to be set to 4 + # all other parameters, if not specified, will be set to default when generate. 
+ ctransformers_generate_config.setdefault("threads", 4) + + return ctransformers_generate_config + + def load(self): + try: + from ctransformers import AutoModelForCausalLM + except ImportError: + error_message = "Failed to import module 'ctransformers'" + + installation_guide = [ + f"Please make sure 'ctransformers' is installed.", + f"You can install it by checking out the repository for command." + f"https://github.com/marella/ctransformers", + ] + + raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}") + + model_path = os.path.join( + self.model_path, + self.model_spec.model_file_name_template.format( + quantization=self.quantization + ), + ) + + self._model_type = self._determine_model_type() + self._llm = AutoModelForCausalLM.from_pretrained( + model_path_or_repo_id=model_path, + model_type=self._model_type, + config=self._ctransformer_model_config, + ) + + @classmethod + def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool: + if llm_spec.model_format != "ggmlv3": + return False + if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL: + return False + if "generate" not in llm_family.model_ability: + return False + return True + + def _determine_model_type(self): + if self._model_family.model_name not in MODEL_TYPE_FOR_CTRANSFORMERS: + raise ValueError( + f"The current model {self._model_family.model_name} is not supported, check your model name. " + ) + return MODEL_TYPE_FOR_CTRANSFORMERS[self._model_family.model_name] + + def generate( + self, prompt: str, generate_config_raw: CtransformersGenerateConfig + ) -> Union[Completion, Iterator[CompletionChunk]]: + def generator_wrapper( + _prompt: str, + _max_new_tokens: Union[int, None], + _generate_config: CtransformersGenerateConfig, + ) -> Iterator[CompletionChunk]: + assert self._model_uid is not None + for _completion_chunk, _ in generate_stream( + model=self._model_uid, + model_ref=self._llm, + prompt=_prompt, + max_new_tokens=_max_new_tokens, + **_generate_config, + ): + yield _completion_chunk + + generate_config = self._sanitize_generate_config(generate_config_raw) + + logger.debug( + "Enter generate, prompt: %s, generate config: %s", prompt, generate_config + ) + + max_new_tokens = generate_config.pop("max_tokens", None) + + stream_or_not = generate_config.get("stream", False) + if stream_or_not: + return generator_wrapper( + _prompt=prompt, + _max_new_tokens=max_new_tokens, + _generate_config=generate_config, + ) + else: + assert self.model_uid is not None + completion_chunk = None + completion_usage = None + for completion_chunk, completion_usage in generate_stream( + model=self.model_uid, + model_ref=self._llm, + prompt=prompt, + max_new_tokens=max_new_tokens, + **generate_config, + ): + pass + + assert completion_chunk is not None + assert completion_usage is not None + + completion = Completion( + id=completion_chunk["id"], + object=completion_chunk["object"], + created=completion_chunk["created"], + model=completion_chunk["model"], + choices=completion_chunk["choices"], + usage=completion_usage, + ) + + logger.debug( + "Generated, completion: %s, generate config: %s", + completion, + generate_config, + ) + + return completion diff --git a/xinference/model/llm/ggml/ctransformers_util.py b/xinference/model/llm/ggml/ctransformers_util.py new file mode 100644 index 0000000000..e263a56a70 --- /dev/null +++ b/xinference/model/llm/ggml/ctransformers_util.py @@ -0,0 +1,161 @@ +# Copyright 2022-2023 XProbe Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import logging +import re +import time +import uuid +from typing import Iterator, Optional, Sequence, Tuple + +from ....types import CompletionChoice, CompletionChunk, CompletionUsage + +logger = logging.getLogger(__name__) + + +def generate_stream( + model, + model_ref, + prompt: str, + *, + max_new_tokens: Optional[int] = None, + top_k: Optional[int] = None, + top_p: Optional[float] = None, + temperature: Optional[float] = None, + repetition_penalty: Optional[float] = None, + last_n_tokens: Optional[int] = None, + seed: Optional[int] = None, + batch_size: Optional[int] = None, + stream: Optional[bool] = False, + threads: Optional[int] = None, + stop: Optional[Sequence[str]] = None, + reset: Optional[bool] = None, + **kwargs, +) -> Iterator[Tuple[CompletionChunk, CompletionUsage]]: + stop = stop or [] + if isinstance(stop, str): + stop = [stop] + + tokens = model_ref.tokenize(prompt) + + stop_regex = re.compile("|".join(map(re.escape, stop))) + count = 0 + text = "" + total_text = "" + incomplete = b"" + + # parameters needed for Xinference. + finish_reason = None + + try: + from ctransformers.utils import utf8_split_incomplete + except ImportError: + error_message = ( + "Failed to import module 'ctransformers - utf8_split_incomplete'" + ) + + installation_guide = [ + "Please make sure 'ctransformers' is installed. You can install it by checking out the repository: " + "https://github.com/marella/ctransformers", + ] + + raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}") + + for token in model_ref.generate( + tokens, + top_k=top_k, + top_p=top_p, + temperature=temperature, + repetition_penalty=repetition_penalty, + last_n_tokens=last_n_tokens, + seed=seed, + batch_size=batch_size, + threads=threads, + reset=reset, + ): + # Handle incomplete UTF-8 multi-byte characters. + incomplete += model_ref.detokenize([token], decode=False) + complete, incomplete = utf8_split_incomplete(incomplete) + output = complete.decode(errors="ignore") + text += output + total_text += output + + # https://github.com/abetlen/llama-cpp-python/blob/1a13d76c487df1c8560132d10bda62d6e2f4fa93/llama_cpp/llama.py#L686-L706 + # Check if one of the stop sequences is part of the text. + # Note that the stop sequence may not always be at the end of text. + if stop: + match = stop_regex.search(text) + if match: + text = text[: match.start()] + finish_reason = "stop" + break + + # Avoid sending the longest suffix of text which is also a prefix + # of a stop sequence, as it can form a stop sequence with the text + # generated later. 
+ longest = 0 + for s in stop: + for i in range(len(s), 0, -1): + if text.endswith(s[:i]): + longest = max(i, longest) + break + + end = len(text) - longest + if end > 0: + output = text[:end] + completion_choice = CompletionChoice( + text=output, index=0, logprobs=None, finish_reason=None + ) + completion_chunk = CompletionChunk( + id=str(uuid.uuid1()), + object="text_completion", + created=int(time.time()), + model=model, + choices=[completion_choice], + ) + completion_usage = CompletionUsage( + prompt_tokens=len(tokens), + completion_tokens=count + 1, + total_tokens=count + 1 + len(tokens), + ) + + yield completion_chunk, completion_usage + text = text[end:] + + count += 1 + if max_new_tokens is not None and count >= max_new_tokens: + finish_reason = "length" + break + + if stream is False: + completion_choice = CompletionChoice( + text=total_text, index=0, logprobs=None, finish_reason=finish_reason + ) + else: + completion_choice = CompletionChoice( + text=text, index=0, logprobs=None, finish_reason=finish_reason + ) + + completion_chunk = CompletionChunk( + id=str(uuid.uuid1()), + object="text_completion", + created=int(time.time()), + model=model, + choices=[completion_choice], + ) + completion_usage = CompletionUsage( + prompt_tokens=len(tokens), + completion_tokens=count, + total_tokens=count + len(tokens), + ) + + yield completion_chunk, completion_usage diff --git a/xinference/model/llm/ggml/llamacpp.py b/xinference/model/llm/ggml/llamacpp.py index bea3e4744e..4bcee28059 100644 --- a/xinference/model/llm/ggml/llamacpp.py +++ b/xinference/model/llm/ggml/llamacpp.py @@ -28,6 +28,7 @@ from ..core import LLM from ..llm_family import LLMFamilyV1, LLMSpecV1 from ..utils import ChatModelMixin +from .ctransformers import CTRANSFORMERS_SUPPORTED_MODEL if TYPE_CHECKING: from llama_cpp import LogitsProcessorList, StoppingCriteriaList @@ -187,7 +188,10 @@ def load(self): def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool: if llm_spec.model_format != "ggmlv3": return False - if "chatglm" in llm_family.model_name: + if ( + "chatglm" in llm_family.model_name + or llm_family.model_name in CTRANSFORMERS_SUPPORTED_MODEL + ): return False if "generate" not in llm_family.model_ability: return False @@ -258,7 +262,10 @@ def __init__( def match(cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1) -> bool: if llm_spec.model_format != "ggmlv3": return False - if "chatglm" in llm_family.model_name: + if ( + "chatglm" in llm_family.model_name + or llm_family.model_name in CTRANSFORMERS_SUPPORTED_MODEL + ): return False if "chat" not in llm_family.model_ability: return False diff --git a/xinference/model/llm/ggml/tests/test_ctransformers.py b/xinference/model/llm/ggml/tests/test_ctransformers.py new file mode 100644 index 0000000000..a129a43a34 --- /dev/null +++ b/xinference/model/llm/ggml/tests/test_ctransformers.py @@ -0,0 +1,160 @@ +# Copyright 2022-2023 XProbe Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import random +import string + +import pytest + +from .....client import Client, GenerateModelHandle +from ....llm import GgmlLLMSpecV1, LLMFamilyV1 +from ..ctransformers import CtransformersModel + +mock_model_spec = GgmlLLMSpecV1( + model_format="ggmlv3", + model_size_in_billions=6, + quantizations=["q2_k", "q4_0"], + model_id="test_id", + model_file_name_template="TestModel.{quantization}.ggmlv3.bin", +) + +test_model_spec = """{ + "version":1, + "model_name":"TestModel", + "model_lang":[ + "en" + ], + "model_ability":[ + "embed", "generate" + ], + "model_specs":[ + { + "model_format":"ggmlv3", + "model_size_in_billions":6, + "quantizations": ["q2_k", "q4_0"], + "model_id":"test_id", + "model_file_name_template":"TestModel.{quantization}.ggmlv3.bin" + }, + { + "model_format":"pytorch", + "model_size_in_billions":3, + "quantizations": ["int8", "int4", "none"], + "model_id":"example/TestModel" + } + ], + "prompt_style": null +}""" + +mock_model_family = LLMFamilyV1.parse_raw(test_model_spec) + + [email protected]( + "model_spec, model_family", [(mock_model_spec, mock_model_family)] +) +def test_ctransformer_init(model_spec, model_family): + from ctransformers import AutoConfig + + quantization = "q4_0" + uid = "".join(random.choice(string.digits) for i in range(15)) + path = "".join( + random.choice(string.ascii_letters + string.punctuation) for i in range(100) + ) + model = CtransformersModel( + model_uid=uid, + model_family=model_family, + model_spec=model_spec, + quantization=quantization, + model_path=path, + ctransformers_model_config=None, + ) + + assert model.model_uid == uid + assert model.quantization == quantization + assert model.model_path == path + assert model._ctransformer_model_config is not None + assert isinstance(model._ctransformer_model_config, AutoConfig) + + assert isinstance(model.model_spec, GgmlLLMSpecV1) + assert isinstance(model.model_family, LLMFamilyV1) + assert isinstance(model.model_family.model_specs[0], GgmlLLMSpecV1) + + assert ( + model.model_family.model_specs[0].model_format == model.model_spec.model_format + ) + assert model.model_family.model_specs[0].model_format == model_spec.model_format + assert ( + model.model_family.model_specs[0].model_size_in_billions + == model.model_spec.model_size_in_billions + ) + assert ( + model.model_family.model_specs[0].model_size_in_billions + == model_spec.model_size_in_billions + ) + assert ( + model.model_family.model_specs[0].quantizations + == model.model_spec.quantizations + ) + assert model.model_family.model_specs[0].quantizations == model_spec.quantizations + assert model.model_family.model_specs[0].model_id == model.model_spec.model_id + assert model.model_family.model_specs[0].model_id == model_spec.model_id + assert ( + model.model_family.model_specs[0].model_file_name_template + == model.model_spec.model_file_name_template + ) + assert ( + model.model_family.model_specs[0].model_file_name_template + == model_spec.model_file_name_template + ) + assert model._llm is None + + [email protected] +async def test_ctransformers_generate(setup): + endpoint, _ = setup + client = Client(endpoint) + assert len(client.list_models()) == 0 + + model_uid = client.launch_model( + model_name="gpt-2", + model_size_in_billions=1, + model_format="ggmlv3", + quantization="none", + ) + + assert len(client.list_models()) == 1 + + model = client.get_model(model_uid=model_uid) + assert isinstance(model, GenerateModelHandle) + + completion = model.generate("AI is going to", generate_config={"max_tokens": 5}) + print(completion) 
+ assert "id" in completion + assert "text" in completion["choices"][0] + assert len(completion["choices"][0]["text"]) > 0 + + assert completion["model"] == model_uid + + assert "finish_reason" in completion["choices"][0] + assert completion["choices"][0]["finish_reason"] == "length" + + assert "prompt_tokens" in completion["usage"] + assert completion["usage"]["prompt_tokens"] == 4 + + assert "completion_tokens" in completion["usage"] + assert completion["usage"]["completion_tokens"] == 5 + + assert "total_tokens" in completion["usage"] + assert completion["usage"]["total_tokens"] == 9 + + client.terminate_model(model_uid=model_uid) + assert len(client.list_models()) == 0 diff --git a/xinference/model/llm/llm_family.json b/xinference/model/llm/llm_family.json index ae481f2245..753b9bf2d0 100644 --- a/xinference/model/llm/llm_family.json +++ b/xinference/model/llm/llm_family.json @@ -744,7 +744,8 @@ "version": 1, "model_name": "qwen-chat", "model_lang": [ - "en", "zh" + "en", + "zh" ], "model_ability": [ "embed", @@ -774,5 +775,51 @@ 151643 ] } + }, + { + "version": 1, + "model_name": "starcoder", + "model_lang": [ + "en" + ], + "model_ability":[ + "generate" + ], + "model_specs": [ + { + "model_format": "ggmlv3", + "model_size_in_billions": 16, + "quantizations": [ + "q4_0", + "q4_1", + "q5_0", + "q5_1", + "q8_0" + ], + "model_id": "TheBloke/starcoder-GGML", + "model_file_name_template": "starcoder.ggmlv3.{quantization}.bin" + } + ] + }, + { + "version": 1, + "model_name": "gpt-2", + "model_lang": [ + "en" + ], + "model_ability":[ + "generate" + ], + "model_specs": [ + { + "model_format": "ggmlv3", + "model_size_in_billions": 1, + "quantizations": [ + "none" + ], + "model_id": "marella/gpt-2-ggml", + "model_file_name_template": "ggml-model.bin" + } + ] } ]
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
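As an aside on the `generate_stream` helper added in the patches above: it withholds any trailing output that could still grow into a stop sequence before emitting a chunk. A minimal, self-contained sketch of that suffix check (the function name and test strings are illustrative, not taken from the patch):

```python
# How much of the streamed text is safe to emit, given the stop sequences?
# Any suffix of `text` that is also a prefix of some stop string is withheld,
# because tokens generated later might complete that stop sequence.
def safe_emit_length(text: str, stop: list[str]) -> int:
    longest = 0
    for s in stop:
        for i in range(len(s), 0, -1):
            if text.endswith(s[:i]):
                longest = max(i, longest)
                break
    return len(text) - longest


print(safe_emit_length("Hello wor", ["world"]))  # 6 -> only "Hello " is emitted for now
print(safe_emit_length("Hello", ["###"]))        # 5 -> nothing needs to be withheld
```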
xorbitsai__inference-285@69c91fa
xorbitsai/inference
Python
285
ENH: Support Environment Variable
ATT

This implementation does the following:

If the user did not pass `-e` or `--endpoint` where it is required, the program checks whether the user has exported an endpoint into the OS environment. If so, the endpoint variable in the program is automatically set to that value; if not, it falls back to the default endpoint, which is http://127.0.0.1:9997.

If the user did pass `-e`, the user's input is kept as the endpoint.
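The resolution order described above can be sketched as a small helper. The constant and function names below follow the merged patch further down in this record (the shipped code builds the default endpoint from separate host/port constants); treat this as an illustrative sketch rather than the exact shipped code:

```python
import os
from typing import Optional

XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT"
XINFERENCE_DEFAULT_ENDPOINT = "http://127.0.0.1:9997"


def get_endpoint(endpoint: Optional[str]) -> str:
    # 1. An explicit -e/--endpoint value always wins.
    if endpoint is not None:
        return endpoint
    # 2. Otherwise fall back to an exported XINFERENCE_ENDPOINT, if present.
    if XINFERENCE_ENV_ENDPOINT in os.environ:
        return os.environ[XINFERENCE_ENV_ENDPOINT]
    # 3. Finally, use the built-in default endpoint.
    return XINFERENCE_DEFAULT_ENDPOINT
```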
2023-08-02T03:40:35Z
ENH: client supports env variable XINFERENCE_ENDPOINT

### Is your feature request related to a problem? Please describe
It could be troublesome when starting xinference with a custom port. You have to specify the endpoint every time you use the xinference CLI.

### Describe the solution you'd like
I hope we can optimize the user experience by using an env variable XINFERENCE_ENDPOINT.

When you start an xinference service, an endpoint will be printed. Then, in the terminal for the xinference CLI, execute `export XINFERENCE_ENDPOINT=${endpoint}`.

The xinference CLI should first check if `-e` is specified. If not, check the env var XINFERENCE_ENDPOINT. If neither of them is set, use the default endpoint.
[ { "body": "### Is your feature request related to a problem? Please describe\r\nIt could be troublesome when starting xinference with a custom port. You have to specify the endpoint every time when you use xinference CLI.\r\n\r\n### Describe the solution you'd like\r\nI hope we can optimize user experience by using an env variable XINFERENCE_ENDPOINT.\r\n\r\nWhen you starts an xinference service, an endpoint will be printed. Then in the terminal for xinference CLI, execute `export XINFERENCE_ENDPOINT=${endpoint}`.\r\n\r\nXinference CLI should firstly check if `-e` is specified. If not, check the env var XINFERENCE_ENDPOINT. If neither of them is set, use the default endpoint. ", "number": 249, "title": "ENH: client supports env variable XINFERENCE_ENDPOINT" } ]
0a6e90c96e745d83688355202a700878771b74b7
{ "head_commit": "69c91fa7b824daa044dbd93a09cee23187d73cdd", "head_commit_message": "ADD the Enviroment Variable Enhancement", "patch_to_review": "diff --git a/xinference/constants.py b/xinference/constants.py\nindex 18c606a831..070d6c7510 100644\n--- a/xinference/constants.py\n+++ b/xinference/constants.py\n@@ -22,3 +22,4 @@\n XINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\n XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\n+XINFERENCE_ENV_ENDPOINT_VARIABLE = \"XINFERENCE_ENDPOINT\"\ndiff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py\nindex 2c7915903b..ee7712e9d3 100644\n--- a/xinference/deploy/cmdline.py\n+++ b/xinference/deploy/cmdline.py\n@@ -14,6 +14,8 @@\n \n \n import logging\n+import os\n+from typing import Optional\n \n import click\n from xoscar.utils import get_next_port\n@@ -24,9 +26,22 @@\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST,\n XINFERENCE_DEFAULT_ENDPOINT_PORT,\n XINFERENCE_DEFAULT_LOCAL_HOST,\n+ XINFERENCE_ENV_ENDPOINT_VARIABLE,\n )\n \n \n+def getExistingEndpointFromEnv(endpoint: Optional[str]) -> str:\n+ # user didn't specify the endpoint.\n+ if endpoint is None:\n+ if XINFERENCE_ENV_ENDPOINT_VARIABLE in os.environ:\n+ return os.environ[XINFERENCE_ENV_ENDPOINT_VARIABLE]\n+ else:\n+ default_endpoint = f\"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}\"\n+ return default_endpoint\n+ else:\n+ return endpoint\n+\n+\n @click.group(invoke_without_command=True, name=\"xinference\")\n @click.pass_context\n @click.version_option(__version__, \"--version\", \"-v\")\n@@ -81,17 +96,18 @@ def supervisor(\n @click.option(\n \"--endpoint\",\n \"-e\",\n- default=f\"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}\",\n type=str,\n )\n @click.option(\"--host\", \"-H\", default=XINFERENCE_DEFAULT_DISTRIBUTED_HOST, type=str)\n-def worker(log_level: str, endpoint: str, host: str):\n+def worker(log_level: str, endpoint: Optional[str], host: str):\n from ..deploy.worker import main\n \n if log_level:\n logging.basicConfig(level=logging.getLevelName(log_level.upper()))\n logging_conf = dict(level=log_level.upper())\n \n+ endpoint = getExistingEndpointFromEnv(endpoint)\n+\n client = RESTfulClient(base_url=endpoint)\n supervisor_internal_addr = client._get_supervisor_internal_address()\n \n@@ -107,7 +123,6 @@ def worker(log_level: str, endpoint: str, host: str):\n @click.option(\n \"--endpoint\",\n \"-e\",\n- default=f\"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}\",\n type=str,\n )\n @click.option(\"--model-name\", \"-n\", type=str)\n@@ -121,6 +136,8 @@ def model_launch(\n model_format: str,\n quantization: str,\n ):\n+ endpoint = getExistingEndpointFromEnv(endpoint)\n+\n client = RESTfulClient(base_url=endpoint)\n model_uid = client.launch_model(\n model_name=model_name,\n@@ -136,7 +153,6 @@ def model_launch(\n @click.option(\n \"--endpoint\",\n \"-e\",\n- default=f\"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}\",\n type=str,\n )\n @click.option(\"--all\", is_flag=True)\n@@ -148,6 +164,8 @@ def model_list(endpoint: str, all: bool):\n # TODO: get from the supervisor\n from ..model.llm import LLM_FAMILIES\n \n+ endpoint = getExistingEndpointFromEnv(endpoint)\n+\n table = []\n if all:\n for model_family in LLM_FAMILIES:\n@@ -195,7 +213,6 @@ def model_list(endpoint: str, all: bool):\n @click.option(\n \"--endpoint\",\n \"-e\",\n- default=f\"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}\",\n type=str,\n 
)\n @click.option(\"--model-uid\", type=str)\n@@ -203,6 +220,8 @@ def model_terminate(\n endpoint: str,\n model_uid: str,\n ):\n+ endpoint = getExistingEndpointFromEnv(endpoint)\n+\n client = RESTfulClient(base_url=endpoint)\n client.terminate_model(model_uid=model_uid)\n \n" }
[ { "diff_hunk": "@@ -22,3 +22,4 @@\n XINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\n XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\n+XINFERENCE_ENV_ENDPOINT_VARIABLE = \"XINFERENCE_ENDPOINT\"", "line": null, "original_line": 25, "original_start_line": null, "path": "xinference/constants.py", "start_line": null, "text": "@user1:\n`XINFERENCE_ENV_ENDPOINT_VARIABLE` -> `XINFERENCE_ENV_ENDPOINT`" }, { "diff_hunk": "@@ -24,9 +26,22 @@\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST,\n XINFERENCE_DEFAULT_ENDPOINT_PORT,\n XINFERENCE_DEFAULT_LOCAL_HOST,\n+ XINFERENCE_ENV_ENDPOINT_VARIABLE,\n )\n \n \n+def getExistingEndpointFromEnv(endpoint: Optional[str]) -> str:", "line": null, "original_line": 33, "original_start_line": null, "path": "xinference/deploy/cmdline.py", "start_line": null, "text": "@user1:\n`getExistingEndpointFromEnv` -> `get_endpoint`" } ]
a489f3eb559a5a8e9a8a36e8ff3512b6b2cefeeb
diff --git a/xinference/constants.py b/xinference/constants.py index 18c606a831..34a684c5e7 100644 --- a/xinference/constants.py +++ b/xinference/constants.py @@ -22,3 +22,4 @@ XINFERENCE_DEFAULT_LOCAL_HOST = "127.0.0.1" XINFERENCE_DEFAULT_DISTRIBUTED_HOST = "0.0.0.0" XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997 +XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT" diff --git a/xinference/deploy/cmdline.py b/xinference/deploy/cmdline.py index 2c7915903b..ce180372cd 100644 --- a/xinference/deploy/cmdline.py +++ b/xinference/deploy/cmdline.py @@ -14,6 +14,8 @@ import logging +import os +from typing import Optional import click from xoscar.utils import get_next_port @@ -24,9 +26,22 @@ XINFERENCE_DEFAULT_DISTRIBUTED_HOST, XINFERENCE_DEFAULT_ENDPOINT_PORT, XINFERENCE_DEFAULT_LOCAL_HOST, + XINFERENCE_ENV_ENDPOINT, ) +def get_endpoint(endpoint: Optional[str]) -> str: + # user didn't specify the endpoint. + if endpoint is None: + if XINFERENCE_ENV_ENDPOINT in os.environ: + return os.environ[XINFERENCE_ENV_ENDPOINT] + else: + default_endpoint = f"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}" + return default_endpoint + else: + return endpoint + + @click.group(invoke_without_command=True, name="xinference") @click.pass_context @click.version_option(__version__, "--version", "-v") @@ -81,17 +96,18 @@ def supervisor( @click.option( "--endpoint", "-e", - default=f"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}", type=str, ) @click.option("--host", "-H", default=XINFERENCE_DEFAULT_DISTRIBUTED_HOST, type=str) -def worker(log_level: str, endpoint: str, host: str): +def worker(log_level: str, endpoint: Optional[str], host: str): from ..deploy.worker import main if log_level: logging.basicConfig(level=logging.getLevelName(log_level.upper())) logging_conf = dict(level=log_level.upper()) + endpoint = get_endpoint(endpoint) + client = RESTfulClient(base_url=endpoint) supervisor_internal_addr = client._get_supervisor_internal_address() @@ -107,7 +123,6 @@ def worker(log_level: str, endpoint: str, host: str): @click.option( "--endpoint", "-e", - default=f"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}", type=str, ) @click.option("--model-name", "-n", type=str) @@ -115,12 +130,14 @@ def worker(log_level: str, endpoint: str, host: str): @click.option("--model-format", "-f", default=None, type=str) @click.option("--quantization", "-q", default=None, type=str) def model_launch( - endpoint: str, + endpoint: Optional[str], model_name: str, size_in_billions: int, model_format: str, quantization: str, ): + endpoint = get_endpoint(endpoint) + client = RESTfulClient(base_url=endpoint) model_uid = client.launch_model( model_name=model_name, @@ -136,11 +153,10 @@ def model_launch( @click.option( "--endpoint", "-e", - default=f"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}", type=str, ) @click.option("--all", is_flag=True) -def model_list(endpoint: str, all: bool): +def model_list(endpoint: Optional[str], all: bool): import sys from tabulate import tabulate @@ -148,6 +164,8 @@ def model_list(endpoint: str, all: bool): # TODO: get from the supervisor from ..model.llm import LLM_FAMILIES + endpoint = get_endpoint(endpoint) + table = [] if all: for model_family in LLM_FAMILIES: @@ -195,14 +213,15 @@ def model_list(endpoint: str, all: bool): @click.option( "--endpoint", "-e", - default=f"http://{XINFERENCE_DEFAULT_LOCAL_HOST}:{XINFERENCE_DEFAULT_ENDPOINT_PORT}", type=str, ) @click.option("--model-uid", type=str) def 
model_terminate( - endpoint: str, + endpoint: Optional[str], model_uid: str, ): + endpoint = get_endpoint(endpoint) + client = RESTfulClient(base_url=endpoint) client.terminate_model(model_uid=model_uid)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-27102@4a3836b
sympy/sympy
Python
27102
Handle multiple equations and inequalities for `parse_expr`
#### References to other Issues or PRs
Fixes https://github.com/sympy/sympy/issues/26834

#### Brief description of what is fixed or changed
Non-monotonic comparisons like `0 < x > 100` are possible with the Python parser, so SymPy can keep them working regardless of whether they are mathematically possible or not. I note that `And` sorts its arguments regardless of `evaluate=False`; that may be a bug in SymPy as well. I also note that `in` and `is` are valid Python comparison operators, and even multiple comparisons with them are supported (for example `1 in [1] in [[1]]`), but I don't think we have decided anything about set expressions, so I have left them raising errors. The other thing that can lead to discussion is raising the correct errors: for example, we can exclude line 1135 and let the dictionary raise `KeyError` naturally, but that may not be among the errors specified in https://docs.python.org/3/library/ast.html#ast-helpers. I also can't easily figure out which types of errors we should use for parsing, because the documentation does not give a very good hint.

#### Other comments

#### Release Notes
<!-- BEGIN RELEASE NOTES -->
- parsing
  - Fixed an issue where `parse_expr` with `evaluate=False` did not handle chained equation or inequality comparisons. For example, `parse_expr("0 < x < 100", evaluate=False)` should now work.
<!-- END RELEASE NOTES -->
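For reference, the behaviour this PR introduces (mirroring one of the test cases added in the patch below): a chained comparison is split into pairwise relations joined by an unevaluated `And`:

```python
from sympy import And, Lt, parse_expr

expr = parse_expr("1 < 2 < 3", evaluate=False)
assert expr == And(Lt(1, 2, evaluate=False), Lt(2, 3, evaluate=False), evaluate=False)
```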
2024-09-22T05:37:46Z
[Potential Bug] parse_expr fails to parse complete expression

Hi, I am new to sympy and am trying to use it for programmatic algebraic simplification. I find it very useful and it works in most cases; however, I found the following case where it deviates from my expected behaviour:

```python3
from sympy import parse_expr
parse_expr("0 <= x <= 100", evaluate=False)
# Output: 0 <= x
```

I believe that trying to parse such an expression should either parse the full expression or fail with an exception if it is unable to parse it completely. Could anybody with more experience please point out whether this is expected, and if yes, whether there is any way to achieve my desired behaviour?
The parser can be made more strict with the following fix:

```diff
diff --git a/sympy/parsing/sympy_parser.py b/sympy/parsing/sympy_parser.py
index 17919fe01a..3779107edf 100644
--- a/sympy/parsing/sympy_parser.py
+++ b/sympy/parsing/sympy_parser.py
@@ -1128,7 +1128,7 @@ class EvaluateFalseTransformer(ast.NodeTransformer):
         ast.Eq: 'Eq'
     }
     def visit_Compare(self, node):
-        if node.ops[0].__class__ in self.relational_operators:
+        if len(node.ops) == 1 and node.ops[0].__class__ in self.relational_operators:
             sympy_class = self.relational_operators[node.ops[0].__class__]
             right = self.visit(node.comparators[0])
             left = self.visit(node.left)
```

However, I'd like to see a more proper fix that handles multiple inequalities rather than throwing an error. But I don't know many details about the Python AST for handling chained equations/inequalities.
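For context on the AST question raised above: Python parses a chained comparison into a single `ast.Compare` node with one `left` operand and parallel `ops`/`comparators` lists, which is exactly what `visit_Compare` receives. A quick, purely illustrative way to inspect that shape:

```python
import ast

node = ast.parse("0 <= x <= 100", mode="eval").body
print(type(node).__name__)                     # Compare
print([type(op).__name__ for op in node.ops])  # ['LtE', 'LtE']
print(len(node.comparators))                   # 2 -> x and 100; node.left holds 0
```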
[ { "body": "Hi, I am new to sympy, and trying to use sympy for programatic algebraic simplification. I find that it is very useful, and works in most cases, however I found the following case where it deviates from my expected behaviour:\r\n\r\n```python3\r\nfrom sympy import parse_expr\r\nparse_expr(\"0 <= x <= 100\", evaluate=False)\r\n# Output: 0 <= x\r\n```\r\n\r\nI believe that trying to parse such an expression should either parse the full expression, or fail with an exception if unable to parse completely. Could anybody with more experience please point out if this is expected, and if yes, is there any way to achieve my desired behaviour?", "number": 26834, "title": "[Potential Bug] parse_expr fails to parse complete expression" } ]
1e56b3fb44e5398a3ef8a3cdd25dd7a7b87cd762
{ "head_commit": "4a3836b87f6c70c8663645d3304e620c15fb3c95", "head_commit_message": "Implement fallback behavior", "patch_to_review": "diff --git a/sympy/parsing/sympy_parser.py b/sympy/parsing/sympy_parser.py\nindex 17919fe01ae6..6f4747b2571a 100644\n--- a/sympy/parsing/sympy_parser.py\n+++ b/sympy/parsing/sympy_parser.py\n@@ -12,7 +12,7 @@\n import types\n from typing import Tuple as tTuple, Dict as tDict, Any, Callable, \\\n List, Optional, Union as tUnion\n-\n+from functools import reduce\n from sympy.assumptions.ask import AssumptionKeys\n from sympy.core.basic import Basic\n from sympy.core import Symbol\n@@ -1128,17 +1128,31 @@ class EvaluateFalseTransformer(ast.NodeTransformer):\n ast.Eq: 'Eq'\n }\n def visit_Compare(self, node):\n- if node.ops[0].__class__ in self.relational_operators:\n- sympy_class = self.relational_operators[node.ops[0].__class__]\n- right = self.visit(node.comparators[0])\n- left = self.visit(node.left)\n- new_node = ast.Call(\n- func=ast.Name(id=sympy_class, ctx=ast.Load()),\n- args=[left, right],\n- keywords=[ast.keyword(arg='evaluate', value=ast.Constant(value=False))]\n+ if any(op.__class__ not in self.relational_operators for op in node.ops):\n+ return node\n+\n+ def reducer(acc, op_right):\n+ result, left = acc\n+ op, right = op_right\n+ new = ast.Call(\n+ func=ast.Name(\n+ id=self.relational_operators[op.__class__], ctx=ast.Load()\n+ ),\n+ args=[self.visit(left), self.visit(right)],\n+ keywords=[ast.keyword(arg=\"evaluate\", value=ast.Constant(value=False))],\n )\n- return new_node\n- return node\n+ return result + [new], right\n+\n+ args, _ = reduce(\n+ reducer, zip(node.ops, node.comparators), ([], node.left)\n+ )\n+ if len(args) == 1:\n+ return args[0]\n+ return ast.Call(\n+ func=ast.Name(id=self.operators[ast.BitAnd], ctx=ast.Load()),\n+ args=args,\n+ keywords=[ast.keyword(arg=\"evaluate\", value=ast.Constant(value=False))],\n+ )\n \n def flatten(self, args, func):\n result = []\ndiff --git a/sympy/parsing/tests/test_sympy_parser.py b/sympy/parsing/tests/test_sympy_parser.py\nindex e4ea2d415648..78e9ebebb072 100644\n--- a/sympy/parsing/tests/test_sympy_parser.py\n+++ b/sympy/parsing/tests/test_sympy_parser.py\n@@ -280,16 +280,21 @@ def test_parse_function_issue_3539():\n assert parse_expr('f(x)') == f(x)\n \n def test_issue_24288():\n- inputs = {\n- \"1 < 2\": Lt(1, 2, evaluate=False),\n- \"1 <= 2\": Le(1, 2, evaluate=False),\n- \"1 > 2\": Gt(1, 2, evaluate=False),\n- \"1 >= 2\": Ge(1, 2, evaluate=False),\n- \"1 != 2\": Ne(1, 2, evaluate=False),\n- \"1 == 2\": Eq(1, 2, evaluate=False)\n- }\n- for text, result in inputs.items():\n- assert parse_expr(text, evaluate=False) == result\n+ assert parse_expr(\"1 < 2\", evaluate=False) == Lt(1, 2, evaluate=False)\n+ assert parse_expr(\"1 <= 2\", evaluate=False) == Le(1, 2, evaluate=False)\n+ assert parse_expr(\"1 > 2\", evaluate=False) == Gt(1, 2, evaluate=False)\n+ assert parse_expr(\"1 >= 2\", evaluate=False) == Ge(1, 2, evaluate=False)\n+ assert parse_expr(\"1 != 2\", evaluate=False) == Ne(1, 2, evaluate=False)\n+ assert parse_expr(\"1 == 2\", evaluate=False) == Eq(1, 2, evaluate=False)\n+ assert parse_expr(\"1 < 2 < 3\", evaluate=False) == And(Lt(1, 2, evaluate=False), Lt(2, 3, evaluate=False), evaluate=False)\n+ assert parse_expr(\"1 <= 2 <= 3\", evaluate=False) == And(Le(1, 2, evaluate=False), Le(2, 3, evaluate=False), evaluate=False)\n+ assert parse_expr(\"1 < 2 <= 3 < 4\", evaluate=False) == \\\n+ And(Lt(1, 2, evaluate=False), Le(2, 3, evaluate=False), Lt(3, 4, evaluate=False), evaluate=False)\n+ # 
parse_expr with evaluate=True also falls back to Python comparison for them\n+ assert parse_expr(r\"1 in {1}\", evaluate=False)\n+ assert parse_expr(r\"1 not in {2}\", evaluate=False)\n+ assert parse_expr(r\"1 is 1\", evaluate=False)\n+ assert parse_expr(r\"1 is not 2\", evaluate=False)\n \n def test_split_symbols_numeric():\n transformations = (\n" }
[ { "diff_hunk": "@@ -280,16 +280,21 @@ def test_parse_function_issue_3539():\n assert parse_expr('f(x)') == f(x)\n \n def test_issue_24288():\n- inputs = {\n- \"1 < 2\": Lt(1, 2, evaluate=False),\n- \"1 <= 2\": Le(1, 2, evaluate=False),\n- \"1 > 2\": Gt(1, 2, evaluate=False),\n- \"1 >= 2\": Ge(1, 2, evaluate=False),\n- \"1 != 2\": Ne(1, 2, evaluate=False),\n- \"1 == 2\": Eq(1, 2, evaluate=False)\n- }\n- for text, result in inputs.items():\n- assert parse_expr(text, evaluate=False) == result\n+ assert parse_expr(\"1 < 2\", evaluate=False) == Lt(1, 2, evaluate=False)\n+ assert parse_expr(\"1 <= 2\", evaluate=False) == Le(1, 2, evaluate=False)\n+ assert parse_expr(\"1 > 2\", evaluate=False) == Gt(1, 2, evaluate=False)\n+ assert parse_expr(\"1 >= 2\", evaluate=False) == Ge(1, 2, evaluate=False)\n+ assert parse_expr(\"1 != 2\", evaluate=False) == Ne(1, 2, evaluate=False)\n+ assert parse_expr(\"1 == 2\", evaluate=False) == Eq(1, 2, evaluate=False)\n+ assert parse_expr(\"1 < 2 < 3\", evaluate=False) == And(Lt(1, 2, evaluate=False), Lt(2, 3, evaluate=False), evaluate=False)\n+ assert parse_expr(\"1 <= 2 <= 3\", evaluate=False) == And(Le(1, 2, evaluate=False), Le(2, 3, evaluate=False), evaluate=False)\n+ assert parse_expr(\"1 < 2 <= 3 < 4\", evaluate=False) == \\\n+ And(Lt(1, 2, evaluate=False), Le(2, 3, evaluate=False), Lt(3, 4, evaluate=False), evaluate=False)\n+ # parse_expr with evaluate=True also falls back to Python comparison for them\n+ assert parse_expr(r\"1 in {1}\", evaluate=False)", "line": null, "original_line": 294, "original_start_line": 293, "path": "sympy/parsing/tests/test_sympy_parser.py", "start_line": null, "text": "@user1:\nI'm not sure if this should be allowed. It uses the Python set containment checking:\r\n```python\r\nIn [1]: parse_expr('x in {y}')\r\nOut[1]: False\r\n\r\nIn [2]: x in S({y})\r\n...\r\nTypeError: did not evaluate to a bool: Eq(y, x)\r\n\r\nIn [3]: Contains(x, {y})\r\nOut[3]: x ∈ {y}\r\n```\r\nIn principle `parse_expr` could use `Contains` but I don't know whether this should be treated as semantic containment checking or just that `parse_expr` represents what the Python code would do.\n\n@author:\nOkay, I'd revert the commit and please revert from there.\r\nI also think that supporting the sets is difficult.\r\nFor example, we also have to fix `parse_expr(\"{1} | {2}\", evaluate=False)`, however, that requires other complicated implementation that distinguishes which operator to use (`Or, Union`) based on the pattern.\n\n@user1:\nI guess if `{1}` was parsed as `FiniteSet(1)` then the parser could depend on `__or__` to choose this but I would like to get away from having the parser evaluate arbitrary code in the long run." } ]
f9e063b0e39fcc5df727163162216a2be0fa73c9
diff --git a/sympy/parsing/sympy_parser.py b/sympy/parsing/sympy_parser.py index 17919fe01ae6..be613dee4d03 100644 --- a/sympy/parsing/sympy_parser.py +++ b/sympy/parsing/sympy_parser.py @@ -12,7 +12,7 @@ import types from typing import Tuple as tTuple, Dict as tDict, Any, Callable, \ List, Optional, Union as tUnion - +from functools import reduce from sympy.assumptions.ask import AssumptionKeys from sympy.core.basic import Basic from sympy.core import Symbol @@ -1128,17 +1128,30 @@ class EvaluateFalseTransformer(ast.NodeTransformer): ast.Eq: 'Eq' } def visit_Compare(self, node): - if node.ops[0].__class__ in self.relational_operators: - sympy_class = self.relational_operators[node.ops[0].__class__] - right = self.visit(node.comparators[0]) - left = self.visit(node.left) - new_node = ast.Call( - func=ast.Name(id=sympy_class, ctx=ast.Load()), - args=[left, right], - keywords=[ast.keyword(arg='evaluate', value=ast.Constant(value=False))] + def reducer(acc, op_right): + result, left = acc + op, right = op_right + if op.__class__ not in self.relational_operators: + raise ValueError("Only equation or inequality operators are supported") + new = ast.Call( + func=ast.Name( + id=self.relational_operators[op.__class__], ctx=ast.Load() + ), + args=[self.visit(left), self.visit(right)], + keywords=[ast.keyword(arg="evaluate", value=ast.Constant(value=False))], ) - return new_node - return node + return result + [new], right + + args, _ = reduce( + reducer, zip(node.ops, node.comparators), ([], node.left) + ) + if len(args) == 1: + return args[0] + return ast.Call( + func=ast.Name(id=self.operators[ast.BitAnd], ctx=ast.Load()), + args=args, + keywords=[ast.keyword(arg="evaluate", value=ast.Constant(value=False))], + ) def flatten(self, args, func): result = [] diff --git a/sympy/parsing/tests/test_sympy_parser.py b/sympy/parsing/tests/test_sympy_parser.py index e4ea2d415648..2bba80229215 100644 --- a/sympy/parsing/tests/test_sympy_parser.py +++ b/sympy/parsing/tests/test_sympy_parser.py @@ -280,16 +280,21 @@ def test_parse_function_issue_3539(): assert parse_expr('f(x)') == f(x) def test_issue_24288(): - inputs = { - "1 < 2": Lt(1, 2, evaluate=False), - "1 <= 2": Le(1, 2, evaluate=False), - "1 > 2": Gt(1, 2, evaluate=False), - "1 >= 2": Ge(1, 2, evaluate=False), - "1 != 2": Ne(1, 2, evaluate=False), - "1 == 2": Eq(1, 2, evaluate=False) - } - for text, result in inputs.items(): - assert parse_expr(text, evaluate=False) == result + assert parse_expr("1 < 2", evaluate=False) == Lt(1, 2, evaluate=False) + assert parse_expr("1 <= 2", evaluate=False) == Le(1, 2, evaluate=False) + assert parse_expr("1 > 2", evaluate=False) == Gt(1, 2, evaluate=False) + assert parse_expr("1 >= 2", evaluate=False) == Ge(1, 2, evaluate=False) + assert parse_expr("1 != 2", evaluate=False) == Ne(1, 2, evaluate=False) + assert parse_expr("1 == 2", evaluate=False) == Eq(1, 2, evaluate=False) + assert parse_expr("1 < 2 < 3", evaluate=False) == And(Lt(1, 2, evaluate=False), Lt(2, 3, evaluate=False), evaluate=False) + assert parse_expr("1 <= 2 <= 3", evaluate=False) == And(Le(1, 2, evaluate=False), Le(2, 3, evaluate=False), evaluate=False) + assert parse_expr("1 < 2 <= 3 < 4", evaluate=False) == \ + And(Lt(1, 2, evaluate=False), Le(2, 3, evaluate=False), Lt(3, 4, evaluate=False), evaluate=False) + # Valid Python relational operators that SymPy does not decide how to handle them yet + raises(ValueError, lambda: parse_expr("1 in 2", evaluate=False)) + raises(ValueError, lambda: parse_expr("1 is 2", evaluate=False)) + raises(ValueError, 
lambda: parse_expr("1 not in 2", evaluate=False)) + raises(ValueError, lambda: parse_expr("1 is not 2", evaluate=False)) def test_split_symbols_numeric(): transformations = (
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-27100@b974add
sympy/sympy
Python
27,100
give SparseMatrix own reshape routine
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> closes #13828 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * matrices * SparseMatrix reshaping is now more efficient <!-- END RELEASE NOTES -->
2024-09-20T13:47:41Z
SparseMatrix should have its own reshape routine The following should be about a million times faster with a dedicated reshape routine that just moves around existing elements and does nothing with 0s: ``` >>> s = SparseMatrix.zeros(1000, 1000) >>> s[0] = 1 >>> s.reshape(1, 10**6) ```
@smichr I want to work on this issue. I need some help that routine should create another zero matrix and substitute non zero element or it should modify existing one? The same thing happens with this `>>> s = eye(10**5)` `Traceback (most recent call last):` ` File "<string>", line 1, in <module>` ` File "/base/data/home/apps/s~sympy-live-hrd/49.400254913747479351/sympy/sympy/matrices/dense.py", line 1275, in eye` `return cls.eye(n)` `File "/base/data/home/apps/s~sympy-live-hrd/49.400254913747479351/sympy/sympy/matrices/dense.py", line 529, in eye` ` mat = [cls._sympify(0)]*n*n MemoryError ` Also, in this case, take very long time `>>> M = eye(10**4) ` should we try to improve some MemoryError so that it can be used for large data such as for machine learning problems and optimizations? > The same thing happens with this ``` >>> s = eye(10**5) Traceback (most recent call last): File "<string>", line 1, in <module> File "/base/data/home/apps/s~sympy-live-hrd/49.400254913747479351/sympy/sympy/matrices/dense.py", line 1275, in eye return cls.eye(n) File "/base/data/home/apps/s~sympy-live-hrd/49.400254913747479351/sympy/sympy/matrices/dense.py", line 529, in eye mat = [cls._sympify(0)]*n*n MemoryError ``` I wonder if we should have a FormulaMatrix (kind of like default dict) which uses a formula to return elements and doesn't autoexpand, so the following would take very little space yet logically correspond to `10**6` elements (like it does now) of the MutableDenseMatrix. ``` eye1000 = Matrix(10**3,10**3,lambda (i,j): 1 if i==j else 0) ``` There is already FunctionMatrix in the matrix expressions: ```py >>> FunctionMatrix(10**3, 10**3, Lambda((i, j), Piecewise((1, Eq(i, j)), (0, True)))) FunctionMatrix(1000, 1000, Lambda((i, j), Piecewise((1, Eq(i, j)), (0, True)))) ``` Of course, matrix expressions already have Identity directly: ```py >>> Identity(10**3) I ``` Matrix expressions are a much better approach to this problem because they can represent the expression symbolically, meaning it's possible to construct further matrices without evaluating. They are also unevaluated by default meaning you don't have to worry about an operation accidentally trying to evaluate a million matrix entries. The OP issue is not hard to fix: ```python In [12]: M = zeros(1000, 1000) In [13]: M[0,0] = 1 In [14]: M.todok() Out[14]: {(0, 0): 1} In [15]: Matrix.from_dok(*M.shape, M.todok()) == M Out[15]: True ``` A reshape function can be added that uses `todok`. I haven't added it at the DomainMatrix level because reshaping a matrix isn't generally useful.
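A rough sketch of the dok-based approach described above, written as a free-standing helper (the name `reshape_via_dok` is made up for illustration); only the stored entries are re-indexed, so the zeros are never touched:

```python
from sympy import SparseMatrix

def reshape_via_dok(M, rows, cols):
    if M.rows * M.cols != rows * cols:
        raise ValueError("Invalid reshape parameters %d %d" % (rows, cols))
    # each stored entry's old flat position i*old_cols + j is redistributed
    # over the new shape via divmod
    dok = {divmod(i * M.cols + j, cols): v for (i, j), v in M.todok().items()}
    return SparseMatrix(rows, cols, dok)

s = SparseMatrix.zeros(4, 4)
s[0, 0] = 1
print(reshape_via_dok(s, 2, 8))  # only the single nonzero entry is moved
```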
[ { "body": "The following should be about a million times faster with a dedicated reshape routine that just moves around existing elements and does nothing with 0s:\r\n\r\n```\r\n>>> s = SparseMatrix.zeros(1000, 1000)\r\n>>> s[0] = 1\r\n>>> s.reshape(1, 10**6)\r\n```", "number": 13828, "title": "SparseMatrix should have its own reshape routine" } ]
a8b17a9a9262890601c06fdc2c264189b5b07e7c
{ "head_commit": "b974addf04db066905ed5a6218c408882f172f1f", "head_commit_message": "give SparseMatrix own reshape routine", "patch_to_review": "diff --git a/sympy/matrices/matrixbase.py b/sympy/matrices/matrixbase.py\nindex 7080b19e8a03..3ac1c8c87697 100644\n--- a/sympy/matrices/matrixbase.py\n+++ b/sympy/matrices/matrixbase.py\n@@ -505,7 +505,9 @@ def reshape(self, rows, cols):\n \"\"\"\n if self.rows * self.cols != rows * cols:\n raise ValueError(\"Invalid reshape parameters %d %d\" % (rows, cols))\n- return self._new(rows, cols, lambda i, j: self[i * cols + j])\n+ dok = {divmod(i*self.cols + j, cols):\n+ v for (i, j), v in self.todok().items()}\n+ return self._eval_from_dok(rows, cols, dok)\n \n def row_del(self, row):\n \"\"\"Delete the specified row.\"\"\"\ndiff --git a/sympy/matrices/tests/test_sparse.py b/sympy/matrices/tests/test_sparse.py\nindex 4d257c8062f2..6b76c8caf07c 100644\n--- a/sympy/matrices/tests/test_sparse.py\n+++ b/sympy/matrices/tests/test_sparse.py\n@@ -316,6 +316,7 @@ def sparse_zeros(n):\n SparseMatrix([(0, 1, 2), (3, 1, 2), (3, 4, 2), (3, 4, 5)])\n assert m1.reshape(2, 6) == \\\n SparseMatrix([(0, 1, 2, 3, 1, 2), (3, 4, 2, 3, 4, 5)])\n+ assert sparse_eye(10**3).reshape(1, 10**6).shape == (1, 10**6) # is not slow\n \n # test_applyfunc\n m0 = sparse_eye(3)\n" }
[ { "diff_hunk": "@@ -316,6 +316,7 @@ def sparse_zeros(n):\n SparseMatrix([(0, 1, 2), (3, 1, 2), (3, 4, 2), (3, 4, 5)])\n assert m1.reshape(2, 6) == \\\n SparseMatrix([(0, 1, 2, 3, 1, 2), (3, 4, 2, 3, 4, 5)])\n+ assert sparse_eye(10**3).reshape(1, 10**6).shape == (1, 10**6) # is not slow", "line": null, "original_line": 319, "original_start_line": null, "path": "sympy/matrices/tests/test_sparse.py", "start_line": null, "text": "@user1:\nI think it would be better just to remove this test. It isn't slow enough on master that it would get noticed in a PR. We can use benchmarks if we want to catch performance regressions." } ]
e851df9af6fcbccf926b66446137029f5ff20f13
diff --git a/sympy/matrices/matrixbase.py b/sympy/matrices/matrixbase.py index 7080b19e8a03..3ac1c8c87697 100644 --- a/sympy/matrices/matrixbase.py +++ b/sympy/matrices/matrixbase.py @@ -505,7 +505,9 @@ def reshape(self, rows, cols): """ if self.rows * self.cols != rows * cols: raise ValueError("Invalid reshape parameters %d %d" % (rows, cols)) - return self._new(rows, cols, lambda i, j: self[i * cols + j]) + dok = {divmod(i*self.cols + j, cols): + v for (i, j), v in self.todok().items()} + return self._eval_from_dok(rows, cols, dok) def row_del(self, row): """Delete the specified row."""
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-27085@c408d1f
sympy/sympy
Python
27,085
DM uses copy
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> closes #26973 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * matrices * domain matrix no longer modifies instantiating list of lists <!-- END RELEASE NOTES -->
2024-09-18T22:44:46Z
DomainMatrix will modify lists used for its construction -- intended? I am not sure that the mutation of the instantiating lists for a DomainMatrix is intended: ```python >>> l = [1,2] >>> DomainMatrix([l],(1,2),ZZ) DomainMatrix([[1, 2]], (1, 2), ZZ) >>> _[0,0]=42 >>> l [42, 2] compare to Matrix >>> l = [1,2] >>> Matrix([l]) Matrix([[1, 2]]) >>> _[0,0]=42 >>> l [1, 2] ```
That could be changed in the constructor. I think at the time that I made it I was very concerned about the performance impact of things like copying a list. Some time later and after lots of performance measurements I don't think it would have any significant impact. The `DomainMatrix` constructor is not really used internally anyway. It might be a nice feature but the fact that it modifies the original should be prominently advertised. The work on `smith_decomp` in #17451 comment would have benefitted from knowing this. Specifically, if `view` is made as a copy of `m[1:,1:]` in the routine, then it will fail, as I recall. > It might be a nice feature but the fact that it modifies the original should be prominently advertised. No, it should not. We should not let anyone depend on this because then we would be tied to that forever. We should change this now to copy the lists. Note that the behaviour here will already be different if python-flint is installed: ```python In [14]: l = [1, 2] In [15]: dm = DomainMatrix([l], (1, 2), ZZ) In [16]: dm Out[16]: DomainMatrix([[1, 2]], (1, 2), ZZ) In [17]: dm.rep Out[17]: DFM([[1, 2]], (1, 2), ZZ) In [18]: dm.rep.rep Out[18]: [1, 2] In [19]: type(dm.rep.rep) Out[19]: flint.types.fmpz_mat.fmpz_mat In [20]: dm[0,0]=42 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[20], line 1 ----> 1 dm[0,0]=42 File ~/current/active/sympy/sympy/polys/matrices/domainmatrix.py:219, in DomainMatrix.__setitem__(self, key, value) 217 i, j = key 218 if not self.domain.of_type(value): --> 219 raise TypeError 220 if isinstance(i, int) and isinstance(j, int): 221 self.rep.setitem(i, j, value) TypeError: In [21]: dm[0,0]=ZZ(42) In [22]: dm Out[22]: DomainMatrix([[42, 2]], (1, 2), ZZ) In [23]: l Out[23]: [1, 2] ``` > No, it should not. Documentation (in a docstring) has no bearing on whether something is public or private. If this is intended to be private-can-change-at-will, that's fine. But it should be clearly documented so someone other than the original author knows how to use it in its current state. > Documentation (in a docstring) has no bearing on whether something is public or private. It does have a bearing on these things. We should not just randomly document unintended aspects of an implementation as "features" because then we encourage people to use them. Here is an example: https://github.com/sympy/sympy/blob/31e957f47a636df9b7e62cd3951ec0afcc266369/sympy/core/numbers.py#L1282-L1290 That was documented by you 233e209 for no reason at all (and naturally self-merged as usual). That documented statement needs to be broken if we want to e.g. use python-flint or gmpy2 as the internal implementation of Rational. No other implementation of rational numbers supports the notion of an "unevaluated rational" so supporting that makes it impossible to use other implementations. Why document something if it is a bad idea to allow users to depend on it and if it is something that we will likely want to change in future? You even added unit tests to assure that future implementation would satisfy the broken behaviour. The fact that an implementation does something now does not make it a feature. In this case what you have identified with DomainMatrix could be described as a bug but definitely not a feature. It was never intended that the implementation would ensure the described behaviour. 
The expectation when writing the code was that the caller of DomainMatrix relinquishes ownership of the lists they pass in or otherwise it was just overlooked that a copy needed to be made. There is absolutely no way that I would intended for users to take advantage of DomainMatrix mutating some external lists because that is an absurd design and attempting to satisfy guarantees like that is an absurd constraint that complicates the process of doing actually useful things like improving reasonable use of matrices. The fix now is definitely not documenting this as a feature: the lists should be copied so that the caller's list are not mutated. The other issue I find while testing out this is that having `python-flint` installed, even does not allow setting elements by python `int` directly. ```python3 from sympy import * from sympy.polys.matrices import DomainMatrix l = [[1, 2]] M = DomainMatrix(l, (1, 2), ZZ) M[0, 0] = 3 ``` ``` File [~/documents/sympy/sympy/polys/matrices/domainmatrix.py:219](http://localhost:8888/home/sylee957/documents/sympy/sympy/polys/matrices/domainmatrix.py#line=218), in DomainMatrix.__setitem__(self, key, value) 217 i, j = key 218 if not self.domain.of_type(value): --> 219 raise TypeError 220 if isinstance(i, int) and isinstance(j, int): 221 self.rep.setitem(i, j, value) TypeError: ``` Using `M[0, 0] = ZZ(3)` works. I indeed think that mutating the list is bug, than the intended behavior, because using flint backend have different behavior. Maybe flint uses its own low level implementation of list, rather than getting handle of Python objects directly, so copying list objects are necessary for that. So to get the consistency between python and flint, the old behavior may be considered a bug rather than implicit feature > The other issue I find while testing out this is that having `python-flint` installed, even does not allow setting elements by python `int` directly. This is intentional. DomainMatrix is a lower-level type and so while it may have some type checks to prevent errors it does not do things like implicit coercion. We should make a higher-level type that wraps DomainMatrix and is friendlier to use like Poly that uses sympify and has implicit conversion to/from Expr etc. The problem is once you start allowing something like setting an `int` it becomes a slippery slope where it is hard to define which types should be acceptable and you end up needing to do the equivalent of sympify everywhere which makes everything slow and complicates every public method. Better just to require that the caller use the correct type. Note that this is already wrong: ```python dm = DomainMatrix([[1, 2]], (1, 2), ZZ) ``` It should be ```python dm = DomainMatrix([[ZZ(1), ZZ(2)]], (1, 2), ZZ) ``` There just isn't much error checking here. There is a friendlier constructor: ```python In [4]: from sympy.polys.matrices import DM In [5]: DM([[1, 2], [3, 4]], ZZ) Out[5]: DomainMatrix([[1, 2], [3, 4]], (2, 2), ZZ) ``` That version can infer the shape and will convert all elements like `ZZ(1)` etc for you. The best way usually though is just to convert a Matrix like: ```python In [6]: M = Matrix([[1, 2], [3, 4]]) In [7]: M.to_DM() Out[7]: DomainMatrix({0: {0: 1, 1: 2}, 1: {0: 3, 1: 4}}, (2, 2), ZZ) ``` Note that this automatically creates a sparse matrix which is often a better choice than dense. The `__setitem__` case is a bit different. I did not originally want DomainMatrix to be mutable at all. 
I had to concede some mutating operations just to make it usable as the internal representation of Matrix but I did not want to encourage anyone to use mutating operations with a DomainMatrix. It is typically better just to use a list of lists or some other data structure if you are going to mutate things. > You even added unit tests to assure that future implementation would satisfy the broken behaviour. I general add tests defensively so that when they fail that someone will know that the implicit behavior has changed. In this case, the change must be added very carefully after doing an audit of the current use to see if it will be impactful in ways that may not have been tested. The intention of the test is not to cook in a behavior, it is to highlight an assumptions and to bring it to an author's attention. > because then we encourage people to use them. If there is code in the codebase it (by definition) can get used. Those using it should know how it behaves. Telling an author how something behaves is not a guarantee that it will not change. Not telling an author how something behaves is a kind of coding alchemy that leaves, as undocumented, features that will need to be understood by someone wanting to use it. It hinders use. Issues like mutability are important aspects of how a routine behaves -- I suspect that searching for "*in place*" should yield more than a few hits. Not documenting key behavior is a hinderance to people using the code well. If it is decided that this is to change, then an audit has to be done to make sure that the new behavior does not cause issues. For example, in the code I left for the `smith_decomp` in https://github.com/sympy/sympy/pull/17451 used to assign `m = clear_column(m)`; that is not necessary if the change is happening in-place; it is necessary (and the code will break) if that feature is changed. Maybe a test will catch that. But maybe it won't. But a simple test that asserts that mutability will serve as a caution that the change must be made only after doing a careful review to make sure that there are no hidden logical failures that might not be exhibited as actual failures. > for no reason at all I wish you could trust that this is never the case. The comments I left above may give you some insight into the rationale I have for adding comments. And, of course, we know how to handle such flags which are incompatible with some other underlying number representation: raise an error indicating that the use of that flag is not compatible in a given context. (Or, decide that any educator that wants to show an unsimplified fraction must do so by using something like `Symbol('2')/Symbol('4')` instead of `Rational(2,4,gcd=1)`.) Uses of flint numbers where it matters is something that is being handled well even though this `gcd` flag exists and is documented: where it matters, we don't need Flint numbers and when we do want special number representations, we don't need the flag. > Not telling an author how something behaves is a kind of coding alchemy that leaves, as undocumented, features that will need to be understood by someone wanting to use it. It hinders use. I don't agree with this. The author here needs to be told not how the behaviour is now but rather what aspects of the behaviour they should be able to depend on and that requires some understanding that some things might change in future. 
If someone wants to use private/undocumented features then they can but we should not encourage use unless we want something to be used (and therefore not changed in future). > Uses of flint numbers where it matters is something that is being handled well We should be using Flint/gmpy2 for Integer/Rational but we are not. I looked into it a few times but each time gave up because there were documented examples that were incompatible with the change. > I don't agree with this. That is fine if you don't agree, but historically SymPy documents how things work, regardless of any author's intentions for the future of code they wrote. The idea that you state here (we shouldn't document how things work because an author intends to change it in the future or because they didn't have an original intention for it to work that way) is new to me. This seems to presume the original author of the code will be present in the future. In this case, that author is you and you do have the full intent to be present in the future, but we (the general SymPy development team) can't assume that. The appropriate assumption is that any given author will not be present in the future to carry forward any intentions they may have. Every <s>line of code</s> module/class/function in SymPy is public, whether we like that or not, and the only thing that has made code in SymPy private in the past (with reasonable consistency) is prepending a single underscore to function/class/module names. Public also generally means that all unintended behavior are features, until we deprecate the offending behavior. > (and naturally self-merged as usual) I've also noticed this happening much more often these days, but not just from @smichr. As far as I understand, we do not allow merging without sign off from at least one reviewer. It would be nice if we move back to that understanding. Maybe it was lost in the development documentation transfer from the wiki to the sphinx docs (as were many other details about our historical development practices). There are many SymPy development practices that have been in place for decades now and if we want to change these things (like what makes something private or public OR whether we avoid documenting existing behavior OR whether self-merging is allowed OR whether we allow squashing OR whether we include type hints) need to go through the SYMPEP system to change them. Otherwise we are always in a quagmire of misalignment of how different authors think things should be. Most other major scientific python teams use formal methods to keep everyone aligned (PEPS, periodic video meetings, governance boards, voting, etc.). My opinion is that SymPy also needs these things.
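A small sketch of the behaviour the fix targets, following the test added in the merged patch (element assignment uses a domain element, as discussed above):

```python
from sympy import ZZ
from sympy.polys.matrices import DomainMatrix

lol = [[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]]
A = DomainMatrix(lol, (2, 2), ZZ)
A[0, 0] = ZZ(42)   # low-level setitem requires a domain element
print(lol[0][0])   # expected to stay 1 once the constructor copies its rows
```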
[ { "body": "I am not sure that the mutation of the instantiating lists for a DomainMatrix is intended:\r\n```python\r\n>>> l = [1,2]\r\n>>> DomainMatrix([l],(1,2),ZZ)\r\nDomainMatrix([[1, 2]], (1, 2), ZZ)\r\n>>> _[0,0]=42\r\n>>> l\r\n[42, 2]\r\n\r\ncompare to Matrix\r\n\r\n>>> l = [1,2]\r\n>>> Matrix([l])\r\nMatrix([[1, 2]])\r\n>>> _[0,0]=42\r\n>>> l\r\n[1, 2]\r\n```", "number": 26973, "title": "DomainMatrix will modify lists used for its construction -- intended?" } ]
c870d17c2638f5061800e344b72ff80086a0b41d
{ "head_commit": "c408d1f3da8300eb2bd60a0381ea00c3f37fb03f", "head_commit_message": "DM uses copy", "patch_to_review": "diff --git a/sympy/polys/matrices/ddm.py b/sympy/polys/matrices/ddm.py\nindex 2425aaffba64..8f90a3305367 100644\n--- a/sympy/polys/matrices/ddm.py\n+++ b/sympy/polys/matrices/ddm.py\n@@ -117,7 +117,7 @@ def __init__(self, rowslist, shape, domain):\n if len(rowslist) != m or any(len(row) != n for row in rowslist):\n raise DMBadInputError(\"Inconsistent row-list/shape\")\n \n- super().__init__(rowslist)\n+ super().__init__([i.copy() for i in rowslist])\n self.shape = (m, n)\n self.rows = m\n self.cols = n\ndiff --git a/sympy/polys/matrices/tests/test_domainmatrix.py b/sympy/polys/matrices/tests/test_domainmatrix.py\nindex b7fe91c574ba..ba3e4952ce02 100644\n--- a/sympy/polys/matrices/tests/test_domainmatrix.py\n+++ b/sympy/polys/matrices/tests/test_domainmatrix.py\n@@ -60,6 +60,11 @@ def test_DomainMatrix_init():\n \n raises(DMBadInputError, lambda: DomainMatrix([[ZZ(1), ZZ(2)]], (2, 2), ZZ))\n \n+ # uses copy\n+ was = [i.copy() for i in lol]\n+ A[0,0] = 42\n+ assert was == lol\n+\n \n def test_DomainMatrix_from_rep():\n ddm = DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)\n" }
[ { "diff_hunk": "@@ -60,6 +60,11 @@ def test_DomainMatrix_init():\n \n raises(DMBadInputError, lambda: DomainMatrix([[ZZ(1), ZZ(2)]], (2, 2), ZZ))\n \n+ # uses copy\n+ was = [i.copy() for i in lol]\n+ A[0,0] = 42", "line": null, "original_line": 65, "original_start_line": null, "path": "sympy/polys/matrices/tests/test_domainmatrix.py", "start_line": null, "text": "@author:\n```suggestion\r\n A[0,0] = ZZ(42)\r\n```" } ]
b39fa6fffeef5a1c71c546817bffac4dcebb06ea
diff --git a/sympy/polys/matrices/ddm.py b/sympy/polys/matrices/ddm.py index 2425aaffba64..8f90a3305367 100644 --- a/sympy/polys/matrices/ddm.py +++ b/sympy/polys/matrices/ddm.py @@ -117,7 +117,7 @@ def __init__(self, rowslist, shape, domain): if len(rowslist) != m or any(len(row) != n for row in rowslist): raise DMBadInputError("Inconsistent row-list/shape") - super().__init__(rowslist) + super().__init__([i.copy() for i in rowslist]) self.shape = (m, n) self.rows = m self.cols = n diff --git a/sympy/polys/matrices/tests/test_domainmatrix.py b/sympy/polys/matrices/tests/test_domainmatrix.py index b7fe91c574ba..2b59d76a9d46 100644 --- a/sympy/polys/matrices/tests/test_domainmatrix.py +++ b/sympy/polys/matrices/tests/test_domainmatrix.py @@ -60,6 +60,11 @@ def test_DomainMatrix_init(): raises(DMBadInputError, lambda: DomainMatrix([[ZZ(1), ZZ(2)]], (2, 2), ZZ)) + # uses copy + was = [i.copy() for i in lol] + A[0,0] = ZZ(42) + assert was == lol + def test_DomainMatrix_from_rep(): ddm = DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-27090@24365bc
sympy/sympy
Python
27,090
_unevaluated_Mul watches for nc args
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> fix #27091 #### Brief description of what is fixed or changed #### Other comments The function can now be used without reservation by `separatevars` when there are noncommutative objects; the `if` was guarding against using the older form of the routine which failed for nc elements. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-09-19T13:11:14Z
_unevaluated_Mul re-orders non-commutative factors ```python from sympy import S from sympy.core.mul import _unevaluated_Mul as f from sympy.abc import x A, B = symbols("A B", commutative=False) >>> f(x, A, B, S(2), A).args (2, x, A, A, B) # should be (2, x, A, B, A) ```
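A short sketch of the intended ordering, matching the test added in the merged patch: commutative factors are collected and sorted, while noncommutative factors keep their relative order:

```python
from sympy import S, symbols
from sympy.core.mul import _unevaluated_Mul

x = symbols("x")
A, B = symbols("A B", commutative=False)
expr = _unevaluated_Mul(x, A, B, S(2), A)
print(expr.args)  # expected (2, x, A, B, A) with the fix applied
```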
[ { "body": "```python\r\nfrom sympy import S\r\nfrom sympy.core.mul import _unevaluated_Mul as f\r\nfrom sympy.abc import x\r\nA, B = symbols(\"A B\", commutative=False)\r\n>>> f(x, A, B, S(2), A).args\r\n(2, x, A, A, B) # should be (2, x, A, B, A)\r\n```", "number": 27091, "title": "_unevaluated_Mul re-orders non-commutative factors" } ]
eeb8b8b063c52c4cc58ccf6585cb2b6631b2b2ae
{ "head_commit": "24365bca644ee533e9de85bfc650f32ac0231bcc", "head_commit_message": "use _unevaluated_Mul with as_independent", "patch_to_review": "diff --git a/sympy/core/expr.py b/sympy/core/expr.py\nindex 7c4015e0f0e5..97112c6390eb 100644\n--- a/sympy/core/expr.py\n+++ b/sympy/core/expr.py\n@@ -1889,9 +1889,7 @@ def has(e):\n depend.extend(nc[i:])\n break\n indep.append(n)\n- return Mul(*indep), (\n- Mul(*depend, evaluate=False) if nc else\n- _unevaluated_Mul(*depend))\n+ return _unevaluated_Mul(*indep), _unevaluated_Mul(*depend)\n \n def as_real_imag(self, deep=True, **hints):\n \"\"\"Performs complex expansion on 'self' and returns a tuple\ndiff --git a/sympy/core/mul.py b/sympy/core/mul.py\nindex 2881402950a6..1e5197e04af5 100644\n--- a/sympy/core/mul.py\n+++ b/sympy/core/mul.py\n@@ -15,9 +15,9 @@\n from .parameters import global_parameters\n from .kind import KindDispatcher\n from .traversal import bottom_up\n-\n from sympy.utilities.iterables import sift\n \n+\n # internal marker to indicate:\n # \"there are still non-commutative objects -- don't forget to process them\"\n class NC_Marker:\n@@ -65,27 +65,25 @@ def _unevaluated_Mul(*args):\n False\n \n \"\"\"\n- args = list(args)\n- newargs = []\n+ cargs = []\n ncargs = []\n+ args = list(args)\n co = S.One\n- while args:\n- a = args.pop()\n+ for a in args:\n if a.is_Mul:\n- c, nc = a.args_cnc()\n- args.extend(c)\n- if nc:\n- ncargs.append(Mul._from_args(nc))\n+ a_c, a_nc = a.args_cnc()\n+ args.extend(a_c) # grow args\n+ ncargs.extend(a_nc)\n elif a.is_Number:\n co *= a\n+ elif a.is_commutative:\n+ cargs.append(a)\n else:\n- newargs.append(a)\n- _mulsort(newargs)\n+ ncargs.append(a)\n+ _mulsort(cargs)\n if co is not S.One:\n- newargs.insert(0, co)\n- if ncargs:\n- newargs.append(Mul._from_args(ncargs))\n- return Mul._from_args(newargs)\n+ cargs.insert(0, co)\n+ return Mul._from_args(cargs+ncargs)\n \n \n class Mul(Expr, AssocOp):\ndiff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py\nindex a37c7818228a..233e00a557e5 100644\n--- a/sympy/core/tests/test_expr.py\n+++ b/sympy/core/tests/test_expr.py\n@@ -7,7 +7,7 @@\n UnevaluatedExpr)\n from sympy.core.function import (Function, expand, WildFunction,\n AppliedUndef, Derivative, diff, Subs)\n-from sympy.core.mul import Mul\n+from sympy.core.mul import Mul, _unevaluated_Mul\n from sympy.core.numbers import (NumberSymbol, E, zoo, oo, Float, I,\n Rational, nan, Integer, Number, pi, _illegal)\n from sympy.core.power import Pow\n@@ -2291,3 +2291,9 @@ def test_format():\n \n def test_issue_24045():\n assert powsimp(exp(a)/((c*a - c*b)*(Float(1.0)*c*a - Float(1.0)*c*b))) # doesn't raise\n+\n+\n+def test__unevaluated_Mul():\n+ A, B = symbols('A B', commutative=False)\n+ assert _unevaluated_Mul(x, A, B, S(2), A).args == (2, x, A, B, A)\n+ assert _unevaluated_Mul(-x*A*B, S(2), A).args == (-2, x, A, B, A)\ndiff --git a/sympy/polys/partfrac.py b/sympy/polys/partfrac.py\nindex a04e5de9b78a..dedc1bf0fba4 100644\n--- a/sympy/polys/partfrac.py\n+++ b/sympy/polys/partfrac.py\n@@ -18,7 +18,7 @@ def apart(f, x=None, full=False, **options):\n \n Given a rational function ``f``, computes the partial fraction\n decomposition of ``f``. Two algorithms are available: One is based on the\n- undertermined coefficients method, the other is Bronstein's full partial\n+ undetermined coefficients method, the other is Bronstein's full partial\n fraction decomposition algorithm.\n \n The undetermined coefficients method (selected by ``full=False``) uses\n" }
[ { "diff_hunk": "@@ -1889,9 +1889,7 @@ def has(e):\n depend.extend(nc[i:])\n break\n indep.append(n)\n- return Mul(*indep), (\n- Mul(*depend, evaluate=False) if nc else\n- _unevaluated_Mul(*depend))\n+ return _unevaluated_Mul(*indep), _unevaluated_Mul(*depend)", "line": null, "original_line": 1892, "original_start_line": null, "path": "sympy/core/expr.py", "start_line": null, "text": "@author:\n```suggestion\r\n return Mul(*indep), _unevaluated_Mul(*depend)\r\n```" } ]
3060a841bd281ca13275dda7714d1f0393ebf3df
diff --git a/sympy/core/expr.py b/sympy/core/expr.py index 7c4015e0f0e5..c90a05b94eea 100644 --- a/sympy/core/expr.py +++ b/sympy/core/expr.py @@ -1889,9 +1889,7 @@ def has(e): depend.extend(nc[i:]) break indep.append(n) - return Mul(*indep), ( - Mul(*depend, evaluate=False) if nc else - _unevaluated_Mul(*depend)) + return Mul(*indep), _unevaluated_Mul(*depend) def as_real_imag(self, deep=True, **hints): """Performs complex expansion on 'self' and returns a tuple diff --git a/sympy/core/mul.py b/sympy/core/mul.py index 2881402950a6..1e5197e04af5 100644 --- a/sympy/core/mul.py +++ b/sympy/core/mul.py @@ -15,9 +15,9 @@ from .parameters import global_parameters from .kind import KindDispatcher from .traversal import bottom_up - from sympy.utilities.iterables import sift + # internal marker to indicate: # "there are still non-commutative objects -- don't forget to process them" class NC_Marker: @@ -65,27 +65,25 @@ def _unevaluated_Mul(*args): False """ - args = list(args) - newargs = [] + cargs = [] ncargs = [] + args = list(args) co = S.One - while args: - a = args.pop() + for a in args: if a.is_Mul: - c, nc = a.args_cnc() - args.extend(c) - if nc: - ncargs.append(Mul._from_args(nc)) + a_c, a_nc = a.args_cnc() + args.extend(a_c) # grow args + ncargs.extend(a_nc) elif a.is_Number: co *= a + elif a.is_commutative: + cargs.append(a) else: - newargs.append(a) - _mulsort(newargs) + ncargs.append(a) + _mulsort(cargs) if co is not S.One: - newargs.insert(0, co) - if ncargs: - newargs.append(Mul._from_args(ncargs)) - return Mul._from_args(newargs) + cargs.insert(0, co) + return Mul._from_args(cargs+ncargs) class Mul(Expr, AssocOp): diff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py index a37c7818228a..233e00a557e5 100644 --- a/sympy/core/tests/test_expr.py +++ b/sympy/core/tests/test_expr.py @@ -7,7 +7,7 @@ UnevaluatedExpr) from sympy.core.function import (Function, expand, WildFunction, AppliedUndef, Derivative, diff, Subs) -from sympy.core.mul import Mul +from sympy.core.mul import Mul, _unevaluated_Mul from sympy.core.numbers import (NumberSymbol, E, zoo, oo, Float, I, Rational, nan, Integer, Number, pi, _illegal) from sympy.core.power import Pow @@ -2291,3 +2291,9 @@ def test_format(): def test_issue_24045(): assert powsimp(exp(a)/((c*a - c*b)*(Float(1.0)*c*a - Float(1.0)*c*b))) # doesn't raise + + +def test__unevaluated_Mul(): + A, B = symbols('A B', commutative=False) + assert _unevaluated_Mul(x, A, B, S(2), A).args == (2, x, A, B, A) + assert _unevaluated_Mul(-x*A*B, S(2), A).args == (-2, x, A, B, A) diff --git a/sympy/polys/partfrac.py b/sympy/polys/partfrac.py index a04e5de9b78a..dedc1bf0fba4 100644 --- a/sympy/polys/partfrac.py +++ b/sympy/polys/partfrac.py @@ -18,7 +18,7 @@ def apart(f, x=None, full=False, **options): Given a rational function ``f``, computes the partial fraction decomposition of ``f``. Two algorithms are available: One is based on the - undertermined coefficients method, the other is Bronstein's full partial + undetermined coefficients method, the other is Bronstein's full partial fraction decomposition algorithm. The undetermined coefficients method (selected by ``full=False``) uses
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xorbitsai__inference-171@72966ac
xorbitsai/inference
Python
171
ENH: Implement RESTful API stream generate
Resolves #166.
2023-07-13T10:18:15Z
ENH: RESTful API stream generate ### Is your feature request related to a problem? Please describe Provide stream generation in the RESTful API. ### Describe the solution you'd like A clear and concise description of what you want to happen. ### Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. ### Additional context Add any other context or screenshots about the feature request here.
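A rough sketch of how a client could consume the server-sent-events style stream this PR adds; the helper name and the exact payload shape are illustrative assumptions, but the `data: ` prefix and the `data: [DONE]` terminator follow the line format used in the patch in this record:

```python
import json
import requests

def stream_generate(endpoint, model_uid, prompt):
    """Yield completion chunks from the /v1/completions SSE stream."""
    url = f"{endpoint}/v1/completions"
    payload = {"model": model_uid, "prompt": prompt, "stream": True}
    with requests.post(url, json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue          # skip SSE keep-alive blank lines
            if line == b"data: [DONE]":
                break             # server signals end of the stream
            yield json.loads(line.decode("utf-8").replace("data: ", "", 1))
```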
[ { "body": "### Is your feature request related to a problem? Please describe\r\nProvide stream generateion in RESTful API.\r\n\r\n### Describe the solution you'd like\r\nA clear and concise description of what you want to happen.\r\n\r\n### Describe alternatives you've considered\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n### Additional context\r\nAdd any other context or screenshots about the feature request here.\r\n", "number": 166, "title": "ENH: RESTful API stream generate" } ]
63e4133dac14e581201987f6166e8ea6691e83cb
{ "head_commit": "72966ace5748d461ff8e3cc31e0e7889f24f2ddc", "head_commit_message": "exception handling", "patch_to_review": "diff --git a/setup.cfg b/setup.cfg\nindex 8c1a49e68d..627bbb8d12 100644\n--- a/setup.cfg\n+++ b/setup.cfg\n@@ -34,6 +34,7 @@ install_requires =\n pydantic\n fastapi\n uvicorn\n+ sse_starlette\n \n [options.packages.find]\n exclude =\ndiff --git a/xinference/client.py b/xinference/client.py\nindex 2d99035cff..c3461f957c 100644\n--- a/xinference/client.py\n+++ b/xinference/client.py\n@@ -13,6 +13,7 @@\n # limitations under the License.\n \n import asyncio\n+import json\n import uuid\n from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Optional, Tuple, Union\n \n@@ -80,6 +81,34 @@ def chat(\n return self._isolation.call(coro)\n \n \n+def streaming_response_iterator(response_lines):\n+ for line in response_lines:\n+ line = line.strip()\n+ if line:\n+ if line == b\"data: [DONE]\":\n+ yield {\"data\": \"End of Response\"}\n+ break\n+ try:\n+ data = json.loads(line.decode(\"utf-8\").replace(\"data: \", \"\", 1))\n+ yield {\"data\": data}\n+ except json.JSONDecodeError:\n+ print(\"Error decoding JSON for line:\", line)\n+\n+\n+def chat_streaming_response_iterator(response_lines):\n+ for line in response_lines:\n+ line = line.strip()\n+ if line:\n+ if line == b\"data: [DONE]\":\n+ yield {\"data\": \"End of Response\"}\n+ break\n+ try:\n+ data = json.loads(line.decode(\"utf-8\").replace(\"data: \", \"\", 1))\n+ yield {\"data\": data}\n+ except json.JSONDecodeError:\n+ print(\"Error decoding JSON for line:\", line)\n+\n+\n class RESTfulModelHandle:\n \"\"\"\n A sync model interface (for RESTful client) which provides type hints that makes it much easier to use xinference\n@@ -113,6 +142,10 @@ def generate(\n raise RuntimeError(\n f\"Failed to generate completion, detail: {response.json()['detail']}\"\n )\n+\n+ if generate_config and generate_config.get(\"stream\"):\n+ return streaming_response_iterator(response.iter_lines())\n+\n response_data = response.json()\n return response_data\n \n@@ -157,6 +190,10 @@ def chat(\n raise RuntimeError(\n f\"Failed to generate chat completion, detail: {response.json()['detail']}\"\n )\n+\n+ if generate_config and generate_config.get(\"stream\"):\n+ return chat_streaming_response_iterator(response.iter_lines())\n+\n response_data = response.json()\n return response_data\n \n@@ -192,6 +229,10 @@ def chat(\n raise RuntimeError(\n f\"Failed to generate chat completion, detail: {response.json()['detail']}\"\n )\n+\n+ if generate_config and generate_config.get(\"stream\"):\n+ return chat_streaming_response_iterator(response.iter_lines())\n+\n response_data = response.json()\n return response_data\n \n@@ -270,7 +311,7 @@ def list_models(self) -> Dict[str, Dict[str, Any]]:\n response = requests.get(url)\n if response.status_code != 200:\n raise RuntimeError(\n- f\"Failed to launch model, detail: {response.json()['detail']}\"\n+ f\"Failed to list model, detail: {response.json()['detail']}\"\n )\n \n response_data = response.json()\ndiff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py\nindex 7453857c24..21a44fbc7b 100644\n--- a/xinference/core/restful_api.py\n+++ b/xinference/core/restful_api.py\n@@ -12,17 +12,22 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import json\n import logging\n import socket\n import threading\n+from functools import partial\n from typing import Any, Dict, List, Literal, Optional, Union\n \n+import anyio\n import gradio 
as gr\n import xoscar as xo\n+from anyio.streams.memory import MemoryObjectSendStream\n from fastapi import APIRouter, FastAPI, HTTPException, Request\n from fastapi.middleware.cors import CORSMiddleware\n from fastapi.responses import JSONResponse\n from pydantic import BaseModel, Field\n+from sse_starlette.sse import EventSourceResponse\n from typing_extensions import NotRequired, TypedDict\n from uvicorn import Config, Server\n \n@@ -294,6 +299,11 @@ async def list_models(self) -> Dict[str, Dict[str, Any]]:\n async def describe_model(self, model_uid: str):\n try:\n return await self._supervisor_ref.describe_model(model_uid)\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n except Exception as e:\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n@@ -307,6 +317,12 @@ async def launch_model(self, request: Request) -> JSONResponse:\n quantization = payload.get(\"quantization\")\n kwargs = payload.get(\"kwargs\", {}) or {}\n \n+ if model_uid is None or model_uid is None:\n+ raise HTTPException(\n+ status_code=400,\n+ detail=\"Invalid input. Please specify the model UID and the model name\",\n+ )\n+\n try:\n await self._supervisor_ref.launch_builtin_model(\n model_uid=model_uid,\n@@ -316,14 +332,29 @@ async def launch_model(self, request: Request) -> JSONResponse:\n quantization=quantization,\n **kwargs,\n )\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n+ except RuntimeError as re:\n+ logger.error(str(re), exc_info=True)\n+ raise HTTPException(status_code=503, detail=str(re))\n+\n except Exception as e:\n- logger.error(e, exc_info=True)\n+ logger.error(str(e), exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n+\n return JSONResponse(content={\"model_uid\": model_uid})\n \n async def terminate_model(self, model_uid: str):\n try:\n await self._supervisor_ref.terminate_model(model_uid)\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n except Exception as e:\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n@@ -344,17 +375,49 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n kwargs = body.dict(exclude=exclude)\n \n if body.logit_bias is not None:\n- raise NotImplementedError\n+ raise HTTPException(status_code=501, detail=\"Not implemented\")\n+\n model_uid = body.model\n \n try:\n model = await self._supervisor_ref.get_model(model_uid)\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n except Exception as e:\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- raise NotImplementedError\n+ # create a pair of memory object streams\n+ send_chan, recv_chan = anyio.create_memory_object_stream(10)\n+\n+ async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n+ async with inner_send_chan:\n+ try:\n+ iterator = await model.generate(body.prompt, kwargs)\n+ async for chunk in iterator:\n+ await inner_send_chan.send(dict(data=json.dumps(chunk)))\n+ if await request.is_disconnected():\n+ raise anyio.get_cancelled_exc_class()()\n+ await inner_send_chan.send(dict(data=\"[DONE]\"))\n+ except anyio.get_cancelled_exc_class() as e:\n+ print(\"disconnected\")\n+ with anyio.move_on_after(1, 
shield=True):\n+ print(\n+ f\"Disconnected from client (via refresh/close) {request.client}\"\n+ )\n+ await inner_send_chan.send(dict(closing=True))\n+ raise e\n+ except Exception as e:\n+ raise HTTPException(status_code=500, detail=str(e))\n+\n+ return EventSourceResponse(\n+ recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n+ )\n+\n else:\n try:\n return await model.generate(body.prompt, kwargs)\n@@ -363,7 +426,7 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n raise HTTPException(status_code=500, detail=str(e))\n \n async def create_embedding(self, request: CreateEmbeddingRequest):\n- raise NotImplementedError\n+ raise HTTPException(status_code=501, detail=\"Not implemented\")\n \n async def create_chat_completion(\n self,\n@@ -381,15 +444,19 @@ async def create_chat_completion(\n kwargs = body.dict(exclude=exclude)\n \n if body.logit_bias is not None:\n- raise NotImplementedError\n+ raise HTTPException(status_code=501, detail=\"Not implemented\")\n+\n+ if (\n+ not body.messages\n+ or body.messages[-1].get(\"role\") != \"user\"\n+ or not body.messages[-1].get(\"content\")\n+ ):\n+ raise HTTPException(\n+ status_code=400, detail=\"Invalid input. Please specify the prompt\"\n+ )\n+\n+ prompt = body.messages[-1][\"content\"]\n \n- user_messages = [\n- msg[\"content\"] for msg in body.messages if msg[\"role\"] == \"user\"\n- ]\n- if user_messages:\n- prompt = user_messages[-1]\n- else:\n- raise HTTPException(status_code=400, detail=\"No prompt given\")\n system_prompt = next(\n (msg[\"content\"] for msg in body.messages if msg[\"role\"] == \"system\"), None\n )\n@@ -397,14 +464,47 @@ async def create_chat_completion(\n chat_history = body.messages\n \n model_uid = body.model\n+\n try:\n model = await self._supervisor_ref.get_model(model_uid)\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n except Exception as e:\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- raise NotImplementedError\n+ # create a pair of memory object streams\n+ send_chan, recv_chan = anyio.create_memory_object_stream(10)\n+\n+ async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n+ async with inner_send_chan:\n+ try:\n+ iterator = await model.chat(\n+ prompt, system_prompt, chat_history, kwargs\n+ )\n+ async for chunk in iterator:\n+ await inner_send_chan.send(dict(data=json.dumps(chunk)))\n+ if await request.is_disconnected():\n+ raise anyio.get_cancelled_exc_class()()\n+ await inner_send_chan.send(dict(data=\"[DONE]\"))\n+ except anyio.get_cancelled_exc_class() as e:\n+ print(\"disconnected\")\n+ with anyio.move_on_after(1, shield=True):\n+ print(\n+ f\"Disconnected from client (via refresh/close) {request.client}\"\n+ )\n+ await inner_send_chan.send(dict(closing=True))\n+ raise e\n+ except Exception as e:\n+ raise HTTPException(status_code=500, detail=str(e))\n+\n+ return EventSourceResponse(\n+ recv_chan, data_sender_callable=partial(event_publisher, send_chan)\n+ )\n+\n else:\n try:\n return await model.chat(prompt, system_prompt, chat_history, kwargs)\ndiff --git a/xinference/core/service.py b/xinference/core/service.py\nindex c7d424a515..bf8452f592 100644\n--- a/xinference/core/service.py\n+++ b/xinference/core/service.py\n@@ -111,7 +111,8 @@ async def launch_builtin_model(\n quantization,\n )\n \n- assert model_uid not in self._model_uid_to_worker\n+ if model_uid in self._model_uid_to_worker:\n+ raise 
ValueError(f\"Model is already in the model list, uid: {model_uid}\")\n \n worker_ref = await self._choose_worker()\n model_ref = yield worker_ref.launch_builtin_model(\n@@ -146,7 +147,8 @@ async def _check_dead_nodes(self):\n \n @log\n async def terminate_model(self, model_uid: str):\n- assert model_uid in self._model_uid_to_worker\n+ if model_uid not in self._model_uid_to_worker:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n \n worker_ref = self._model_uid_to_worker[model_uid]\n await worker_ref.terminate_model(model_uid=model_uid)\n@@ -154,11 +156,17 @@ async def terminate_model(self, model_uid: str):\n \n @log\n async def get_model(self, model_uid: str) -> xo.ActorRefType[\"ModelActor\"]:\n+ if model_uid not in self._model_uid_to_worker:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n+\n worker_ref = self._model_uid_to_worker[model_uid]\n return await worker_ref.get_model(model_uid=model_uid)\n \n @log\n async def describe_model(self, model_uid: str):\n+ if model_uid not in self._model_uid_to_worker:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n+\n worker_ref = self._model_uid_to_worker[model_uid]\n return await worker_ref.describe_model(model_uid=model_uid)\n \n@@ -288,9 +296,11 @@ async def launch_builtin_model(\n \n @log\n async def terminate_model(self, model_uid: str):\n- assert model_uid in self._model_uid_to_model\n+ if model_uid not in self._model_uid_to_model:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n \n model_ref = self._model_uid_to_model[model_uid]\n+\n await xo.destroy_actor(model_ref)\n del self._model_uid_to_model[model_uid]\n del self._model_uid_to_model_spec[model_uid]\n@@ -307,10 +317,16 @@ async def list_models(self) -> List[Tuple[str, ModelSpec]]:\n \n @log\n async def get_model(self, model_uid: str) -> xo.ActorRefType[\"ModelActor\"]:\n+ if model_uid not in self._model_uid_to_model:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n+\n return self._model_uid_to_model[model_uid]\n \n @log\n async def describe_model(self, model_uid: str) -> ModelSpec:\n+ if model_uid not in self._model_uid_to_model:\n+ raise ValueError(f\"Model not found in the model list, uid: {model_uid}\")\n+\n return self._model_uid_to_model_spec[model_uid]\n \n async def report_status(self):\ndiff --git a/xinference/core/tests/test_restful_api.py b/xinference/core/tests/test_restful_api.py\nindex 13824b0847..5e656bafc1 100644\n--- a/xinference/core/tests/test_restful_api.py\n+++ b/xinference/core/tests/test_restful_api.py\n@@ -35,6 +35,14 @@ async def test_restful_api(setup):\n model_uid_res = response_data[\"model_uid\"]\n assert model_uid_res == \"test\"\n \n+ payload = {\"model_uid\": \"test\", \"model_name\": \"orca\", \"quantization\": \"q4_0\"}\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 400\n+\n+ payload = {\"model_name\": \"orca\", \"quantization\": \"q4_0\"}\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 400\n+\n # list\n response = requests.get(url)\n response_data = response.json()\n@@ -43,9 +51,11 @@ async def test_restful_api(setup):\n # describe\n response = requests.get(f\"{endpoint}/v1/models/test\")\n response_data = response.json()\n- print(response_data)\n assert response_data[\"model_name\"] == \"orca\"\n \n+ response = requests.delete(f\"{endpoint}/v1/models/bogus\")\n+ assert response.status_code == 400\n+\n # generate\n url = 
f\"{endpoint}/v1/completions\"\n payload = {\n@@ -56,6 +66,19 @@ async def test_restful_api(setup):\n completion = response.json()\n assert \"text\" in completion[\"choices\"][0]\n \n+ payload = {\n+ \"model\": \"bogus\",\n+ \"prompt\": \"Once upon a time, there was a very old computer.\",\n+ }\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 400\n+\n+ payload = {\n+ \"prompt\": \"Once upon a time, there was a very old computer.\",\n+ }\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 422\n+\n # chat\n url = f\"{endpoint}/v1/chat/completions\"\n payload = {\n@@ -71,6 +94,40 @@ async def test_restful_api(setup):\n completion = response.json()\n assert \"content\" in completion[\"choices\"][0][\"message\"]\n \n+ payload = {\n+ \"messages\": [\n+ {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n+ {\"role\": \"user\", \"content\": \"Hello!\"},\n+ {\"role\": \"assistant\", \"content\": \"Hi what can I help you?\"},\n+ {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n+ ],\n+ }\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 422\n+\n+ payload = {\n+ \"model\": \"bogus\",\n+ \"messages\": [\n+ {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n+ {\"role\": \"user\", \"content\": \"Hello!\"},\n+ {\"role\": \"assistant\", \"content\": \"Hi what can I help you?\"},\n+ {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n+ ],\n+ }\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 400\n+\n+ payload = {\n+ \"model\": model_uid_res,\n+ \"messages\": [\n+ {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n+ {\"role\": \"user\", \"content\": \"Hello!\"},\n+ {\"role\": \"assistant\", \"content\": \"Hi what can I help you?\"},\n+ ],\n+ }\n+ response = requests.post(url, json=payload)\n+ assert response.status_code == 400\n+\n # delete\n url = f\"{endpoint}/v1/models/test\"\n response = requests.delete(url)\n@@ -83,4 +140,4 @@ async def test_restful_api(setup):\n # delete again\n url = f\"{endpoint}/v1/models/test\"\n response = requests.delete(url)\n- assert response.status_code != 200\n+ assert response.status_code == 400\ndiff --git a/xinference/tests/test_client.py b/xinference/tests/test_client.py\nindex 23d02c4ed6..9d489b0b09 100644\n--- a/xinference/tests/test_client.py\n+++ b/xinference/tests/test_client.py\n@@ -50,17 +50,45 @@ async def test_RESTful_client(setup):\n \n model = client.get_model(model_uid=model_uid)\n \n+ with pytest.raises(RuntimeError):\n+ model = client.get_model(model_uid=\"test\")\n+\n+ with pytest.raises(RuntimeError):\n+ completion = model.generate({\"max_tokens\": 64})\n+\n completion = model.generate(\"Once upon a time, there was a very old computer\")\n assert \"text\" in completion[\"choices\"][0]\n \n completion = model.generate(\n- \"Once upon a time, there was a very old computer\", {\"max_tokens\": 256}\n+ \"Once upon a time, there was a very old computer\", {\"max_tokens\": 64}\n )\n assert \"text\" in completion[\"choices\"][0]\n \n- completion = model.chat(\"write a poem.\")\n+ streaming_response = model.generate(\n+ \"Once upon a time, there was a very old computer\",\n+ {\"max_tokens\": 64, \"stream\": True},\n+ )\n+\n+ for chunk in streaming_response:\n+ assert (\n+ chunk[\"data\"] == \"End of Response\" or \"text\" in chunk[\"data\"][\"choices\"][0]\n+ )\n+\n+ with pytest.raises(RuntimeError):\n+ completion = 
model.chat({\"max_tokens\": 64})\n+\n+ completion = model.chat(\"What is the capital of France?\")\n assert \"content\" in completion[\"choices\"][0][\"message\"]\n \n+ streaming_response = model.chat(\n+ prompt=\"What is the capital of France?\", generate_config={\"stream\": True}\n+ )\n+\n+ for chunk in streaming_response:\n+ assert chunk[\"data\"] == \"End of Response\" or (\n+ \"content\" or \"role\" in chunk[\"data\"][\"choices\"][0][\"delta\"]\n+ )\n+\n client.terminate_model(model_uid=model_uid)\n assert len(client.list_models()) == 0\n \n" }
[ { "diff_hunk": "@@ -80,6 +81,34 @@ def chat(\n return self._isolation.call(coro)\n \n \n+def streaming_response_iterator(response_lines):\n+ for line in response_lines:\n+ line = line.strip()\n+ if line:\n+ if line == b\"data: [DONE]\":", "line": null, "original_line": 88, "original_start_line": null, "path": "xinference/client.py", "start_line": null, "text": "@user1:\nThe ending message is not necessary." }, { "diff_hunk": "@@ -80,6 +81,34 @@ def chat(\n return self._isolation.call(coro)\n \n \n+def streaming_response_iterator(response_lines):\n+ for line in response_lines:\n+ line = line.strip()\n+ if line:\n+ if line == b\"data: [DONE]\":\n+ yield {\"data\": \"End of Response\"}\n+ break\n+ try:\n+ data = json.loads(line.decode(\"utf-8\").replace(\"data: \", \"\", 1))\n+ yield {\"data\": data}\n+ except json.JSONDecodeError:\n+ print(\"Error decoding JSON for line:\", line)", "line": null, "original_line": 95, "original_start_line": null, "path": "xinference/client.py", "start_line": null, "text": "@user1:\nThe error should not be captured here." }, { "diff_hunk": "@@ -316,14 +332,29 @@ async def launch_model(self, request: Request) -> JSONResponse:\n quantization=quantization,\n **kwargs,\n )\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n+ except RuntimeError as re:\n+ logger.error(str(re), exc_info=True)\n+ raise HTTPException(status_code=503, detail=str(re))\n+\n except Exception as e:\n- logger.error(e, exc_info=True)\n+ logger.error(str(e), exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n+\n return JSONResponse(content={\"model_uid\": model_uid})\n \n async def terminate_model(self, model_uid: str):\n try:\n await self._supervisor_ref.terminate_model(model_uid)\n+", "line": null, "original_line": 353, "original_start_line": null, "path": "xinference/core/restful_api.py", "start_line": null, "text": "@user1:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -344,17 +375,49 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n kwargs = body.dict(exclude=exclude)\n \n if body.logit_bias is not None:\n- raise NotImplementedError\n+ raise HTTPException(status_code=501, detail=\"Not implemented\")\n+\n model_uid = body.model\n \n try:\n model = await self._supervisor_ref.get_model(model_uid)\n+\n+ except ValueError as ve:\n+ logger.error(str(ve), exc_info=True)\n+ raise HTTPException(status_code=400, detail=str(ve))\n+\n except Exception as e:\n logger.error(e, exc_info=True)\n raise HTTPException(status_code=500, detail=str(e))\n \n if body.stream:\n- raise NotImplementedError\n+ # create a pair of memory object streams\n+ send_chan, recv_chan = anyio.create_memory_object_stream(10)\n+\n+ async def event_publisher(inner_send_chan: MemoryObjectSendStream):\n+ async with inner_send_chan:\n+ try:\n+ iterator = await model.generate(body.prompt, kwargs)\n+ async for chunk in iterator:\n+ await inner_send_chan.send(dict(data=json.dumps(chunk)))\n+ if await request.is_disconnected():\n+ raise anyio.get_cancelled_exc_class()()\n+ await inner_send_chan.send(dict(data=\"[DONE]\"))", "line": null, "original_line": 405, "original_start_line": null, "path": "xinference/core/restful_api.py", "start_line": null, "text": "@user1:\nThe ending message is not compatible with the OpenAI API." 
}, { "diff_hunk": "@@ -344,17 +375,49 @@ async def create_completion(self, request: Request, body: CreateCompletionReques\n kwargs = body.dict(exclude=exclude)\n \n if body.logit_bias is not None:\n- raise NotImplementedError\n+ raise HTTPException(status_code=501, detail=\"Not implemented\")\n+\n model_uid = body.model\n \n try:\n model = await self._supervisor_ref.get_model(model_uid)\n+", "line": null, "original_line": 384, "original_start_line": null, "path": "xinference/core/restful_api.py", "start_line": null, "text": "@user1:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -80,6 +81,34 @@ def chat(\n return self._isolation.call(coro)\n \n \n+def streaming_response_iterator(response_lines):", "line": null, "original_line": 84, "original_start_line": null, "path": "xinference/client.py", "start_line": null, "text": "@user1:\nType hint." } ]
3689d14a05ff62974ad0c1f940c0d906739130e5
diff --git a/setup.cfg b/setup.cfg index 082ee4bded..1b9fef7e22 100644 --- a/setup.cfg +++ b/setup.cfg @@ -34,6 +34,7 @@ install_requires = pydantic fastapi uvicorn + sse_starlette [options.packages.find] exclude = diff --git a/xinference/client.py b/xinference/client.py index 60b751d994..b7d9c7cdfa 100644 --- a/xinference/client.py +++ b/xinference/client.py @@ -13,6 +13,7 @@ # limitations under the License. import asyncio +import json import uuid from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Optional, Tuple, Union @@ -87,6 +88,27 @@ def chat( return self._isolation.call(coro) +def streaming_response_iterator( + response_lines: Iterator[bytes], +) -> Iterator["CompletionChunk"]: + for line in response_lines: + line = line.strip() + if line.startswith(b"data:"): + data = json.loads(line.decode("utf-8").replace("data: ", "", 1)) + yield data + + +# Duplicate code due to type hint issues +def chat_streaming_response_iterator( + response_lines: Iterator[bytes], +) -> Iterator["ChatCompletionChunk"]: + for line in response_lines: + line = line.strip() + if line.startswith(b"data:"): + data = json.loads(line.decode("utf-8").replace("data: ", "", 1)) + yield data + + class RESTfulModelHandle: """ A sync model interface (for RESTful client) which provides type hints that makes it much easier to use xinference @@ -124,6 +146,10 @@ def generate( raise RuntimeError( f"Failed to generate completion, detail: {response.json()['detail']}" ) + + if generate_config and generate_config.get("stream"): + return streaming_response_iterator(response.iter_lines()) + response_data = response.json() return response_data @@ -170,6 +196,10 @@ def chat( raise RuntimeError( f"Failed to generate chat completion, detail: {response.json()['detail']}" ) + + if generate_config and generate_config.get("stream"): + return chat_streaming_response_iterator(response.iter_lines()) + response_data = response.json() return response_data @@ -205,6 +235,10 @@ def chat( raise RuntimeError( f"Failed to generate chat completion, detail: {response.json()['detail']}" ) + + if generate_config and generate_config.get("stream"): + return chat_streaming_response_iterator(response.iter_lines()) + response_data = response.json() return response_data @@ -286,7 +320,7 @@ def list_models(self) -> Dict[str, Dict[str, Any]]: response = requests.get(url) if response.status_code != 200: raise RuntimeError( - f"Failed to launch model, detail: {response.json()['detail']}" + f"Failed to list model, detail: {response.json()['detail']}" ) response_data = response.json() diff --git a/xinference/core/restful_api.py b/xinference/core/restful_api.py index 12ad8f3803..ff7af393c6 100644 --- a/xinference/core/restful_api.py +++ b/xinference/core/restful_api.py @@ -12,17 +12,22 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+import json import logging import socket import threading +from functools import partial from typing import Any, Dict, List, Literal, Optional, Union +import anyio import gradio as gr import xoscar as xo +from anyio.streams.memory import MemoryObjectSendStream from fastapi import APIRouter, FastAPI, HTTPException, Request from fastapi.middleware.cors import CORSMiddleware from fastapi.responses import JSONResponse from pydantic import BaseModel, Field +from sse_starlette.sse import EventSourceResponse from typing_extensions import NotRequired, TypedDict from uvicorn import Config, Server @@ -294,6 +299,11 @@ async def list_models(self) -> Dict[str, Dict[str, Any]]: async def describe_model(self, model_uid: str): try: return await self._supervisor_ref.describe_model(model_uid) + + except ValueError as ve: + logger.error(str(ve), exc_info=True) + raise HTTPException(status_code=400, detail=str(ve)) + except Exception as e: logger.error(e, exc_info=True) raise HTTPException(status_code=500, detail=str(e)) @@ -307,6 +317,12 @@ async def launch_model(self, request: Request) -> JSONResponse: quantization = payload.get("quantization") kwargs = payload.get("kwargs", {}) or {} + if model_uid is None or model_uid is None: + raise HTTPException( + status_code=400, + detail="Invalid input. Please specify the model UID and the model name", + ) + try: await self._supervisor_ref.launch_builtin_model( model_uid=model_uid, @@ -316,14 +332,26 @@ async def launch_model(self, request: Request) -> JSONResponse: quantization=quantization, **kwargs, ) + + except ValueError as ve: + logger.error(str(ve), exc_info=True) + raise HTTPException(status_code=400, detail=str(ve)) + except RuntimeError as re: + logger.error(str(re), exc_info=True) + raise HTTPException(status_code=503, detail=str(re)) except Exception as e: - logger.error(e, exc_info=True) + logger.error(str(e), exc_info=True) raise HTTPException(status_code=500, detail=str(e)) + return JSONResponse(content={"model_uid": model_uid}) async def terminate_model(self, model_uid: str): try: await self._supervisor_ref.terminate_model(model_uid) + except ValueError as ve: + logger.error(str(ve), exc_info=True) + raise HTTPException(status_code=400, detail=str(ve)) + except Exception as e: logger.error(e, exc_info=True) raise HTTPException(status_code=500, detail=str(e)) @@ -344,17 +372,47 @@ async def create_completion(self, request: Request, body: CreateCompletionReques kwargs = body.dict(exclude=exclude) if body.logit_bias is not None: - raise NotImplementedError + raise HTTPException(status_code=501, detail="Not implemented") + model_uid = body.model try: model = await self._supervisor_ref.get_model(model_uid) + except ValueError as ve: + logger.error(str(ve), exc_info=True) + raise HTTPException(status_code=400, detail=str(ve)) + except Exception as e: logger.error(e, exc_info=True) raise HTTPException(status_code=500, detail=str(e)) if body.stream: - raise NotImplementedError + # create a pair of memory object streams + send_chan, recv_chan = anyio.create_memory_object_stream(10) + + async def event_publisher(inner_send_chan: MemoryObjectSendStream): + async with inner_send_chan: + try: + iterator = await model.generate(body.prompt, kwargs) + async for chunk in iterator: + await inner_send_chan.send(dict(data=json.dumps(chunk))) + if await request.is_disconnected(): + raise anyio.get_cancelled_exc_class()() + except anyio.get_cancelled_exc_class() as e: + logger.warning("disconnected") + with anyio.move_on_after(1, shield=True): + logger.warning( + 
f"Disconnected from client (via refresh/close) {request.client}" + ) + await inner_send_chan.send(dict(closing=True)) + raise e + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + return EventSourceResponse( + recv_chan, data_sender_callable=partial(event_publisher, send_chan) + ) + else: try: return await model.generate(body.prompt, kwargs) @@ -363,7 +421,7 @@ async def create_completion(self, request: Request, body: CreateCompletionReques raise HTTPException(status_code=500, detail=str(e)) async def create_embedding(self, request: CreateEmbeddingRequest): - raise NotImplementedError + raise HTTPException(status_code=501, detail="Not implemented") async def create_chat_completion( self, @@ -381,15 +439,19 @@ async def create_chat_completion( kwargs = body.dict(exclude=exclude) if body.logit_bias is not None: - raise NotImplementedError + raise HTTPException(status_code=501, detail="Not implemented") + + if ( + not body.messages + or body.messages[-1].get("role") != "user" + or not body.messages[-1].get("content") + ): + raise HTTPException( + status_code=400, detail="Invalid input. Please specify the prompt." + ) + + prompt = body.messages[-1]["content"] - user_messages = [ - msg["content"] for msg in body.messages if msg["role"] == "user" - ] - if user_messages: - prompt = user_messages[-1] - else: - raise HTTPException(status_code=400, detail="No prompt given") system_prompt = next( (msg["content"] for msg in body.messages if msg["role"] == "system"), None ) @@ -397,14 +459,46 @@ async def create_chat_completion( chat_history = body.messages model_uid = body.model + try: model = await self._supervisor_ref.get_model(model_uid) + + except ValueError as ve: + logger.error(str(ve), exc_info=True) + raise HTTPException(status_code=400, detail=str(ve)) except Exception as e: logger.error(e, exc_info=True) raise HTTPException(status_code=500, detail=str(e)) if body.stream: - raise NotImplementedError + # create a pair of memory object streams + send_chan, recv_chan = anyio.create_memory_object_stream(10) + + async def event_publisher(inner_send_chan: MemoryObjectSendStream): + async with inner_send_chan: + try: + iterator = await model.chat( + prompt, system_prompt, chat_history, kwargs + ) + async for chunk in iterator: + await inner_send_chan.send(dict(data=json.dumps(chunk))) + if await request.is_disconnected(): + raise anyio.get_cancelled_exc_class()() + except anyio.get_cancelled_exc_class() as e: + logger.warning("disconnected") + with anyio.move_on_after(1, shield=True): + logger.warning( + f"Disconnected from client (via refresh/close) {request.client}" + ) + await inner_send_chan.send(dict(closing=True)) + raise e + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + return EventSourceResponse( + recv_chan, data_sender_callable=partial(event_publisher, send_chan) + ) + else: try: return await model.chat(prompt, system_prompt, chat_history, kwargs) diff --git a/xinference/core/service.py b/xinference/core/service.py index c7d424a515..bf8452f592 100644 --- a/xinference/core/service.py +++ b/xinference/core/service.py @@ -111,7 +111,8 @@ async def launch_builtin_model( quantization, ) - assert model_uid not in self._model_uid_to_worker + if model_uid in self._model_uid_to_worker: + raise ValueError(f"Model is already in the model list, uid: {model_uid}") worker_ref = await self._choose_worker() model_ref = yield worker_ref.launch_builtin_model( @@ -146,7 +147,8 @@ async def _check_dead_nodes(self): @log async def 
terminate_model(self, model_uid: str): - assert model_uid in self._model_uid_to_worker + if model_uid not in self._model_uid_to_worker: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") worker_ref = self._model_uid_to_worker[model_uid] await worker_ref.terminate_model(model_uid=model_uid) @@ -154,11 +156,17 @@ async def terminate_model(self, model_uid: str): @log async def get_model(self, model_uid: str) -> xo.ActorRefType["ModelActor"]: + if model_uid not in self._model_uid_to_worker: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") + worker_ref = self._model_uid_to_worker[model_uid] return await worker_ref.get_model(model_uid=model_uid) @log async def describe_model(self, model_uid: str): + if model_uid not in self._model_uid_to_worker: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") + worker_ref = self._model_uid_to_worker[model_uid] return await worker_ref.describe_model(model_uid=model_uid) @@ -288,9 +296,11 @@ async def launch_builtin_model( @log async def terminate_model(self, model_uid: str): - assert model_uid in self._model_uid_to_model + if model_uid not in self._model_uid_to_model: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") model_ref = self._model_uid_to_model[model_uid] + await xo.destroy_actor(model_ref) del self._model_uid_to_model[model_uid] del self._model_uid_to_model_spec[model_uid] @@ -307,10 +317,16 @@ async def list_models(self) -> List[Tuple[str, ModelSpec]]: @log async def get_model(self, model_uid: str) -> xo.ActorRefType["ModelActor"]: + if model_uid not in self._model_uid_to_model: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") + return self._model_uid_to_model[model_uid] @log async def describe_model(self, model_uid: str) -> ModelSpec: + if model_uid not in self._model_uid_to_model: + raise ValueError(f"Model not found in the model list, uid: {model_uid}") + return self._model_uid_to_model_spec[model_uid] async def report_status(self): diff --git a/xinference/core/tests/test_restful_api.py b/xinference/core/tests/test_restful_api.py index 13824b0847..5e656bafc1 100644 --- a/xinference/core/tests/test_restful_api.py +++ b/xinference/core/tests/test_restful_api.py @@ -35,6 +35,14 @@ async def test_restful_api(setup): model_uid_res = response_data["model_uid"] assert model_uid_res == "test" + payload = {"model_uid": "test", "model_name": "orca", "quantization": "q4_0"} + response = requests.post(url, json=payload) + assert response.status_code == 400 + + payload = {"model_name": "orca", "quantization": "q4_0"} + response = requests.post(url, json=payload) + assert response.status_code == 400 + # list response = requests.get(url) response_data = response.json() @@ -43,9 +51,11 @@ async def test_restful_api(setup): # describe response = requests.get(f"{endpoint}/v1/models/test") response_data = response.json() - print(response_data) assert response_data["model_name"] == "orca" + response = requests.delete(f"{endpoint}/v1/models/bogus") + assert response.status_code == 400 + # generate url = f"{endpoint}/v1/completions" payload = { @@ -56,6 +66,19 @@ async def test_restful_api(setup): completion = response.json() assert "text" in completion["choices"][0] + payload = { + "model": "bogus", + "prompt": "Once upon a time, there was a very old computer.", + } + response = requests.post(url, json=payload) + assert response.status_code == 400 + + payload = { + "prompt": "Once upon a time, there was a very old computer.", + } + response = 
requests.post(url, json=payload) + assert response.status_code == 422 + # chat url = f"{endpoint}/v1/chat/completions" payload = { @@ -71,6 +94,40 @@ async def test_restful_api(setup): completion = response.json() assert "content" in completion["choices"][0]["message"] + payload = { + "messages": [ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Hello!"}, + {"role": "assistant", "content": "Hi what can I help you?"}, + {"role": "user", "content": "What is the capital of France?"}, + ], + } + response = requests.post(url, json=payload) + assert response.status_code == 422 + + payload = { + "model": "bogus", + "messages": [ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Hello!"}, + {"role": "assistant", "content": "Hi what can I help you?"}, + {"role": "user", "content": "What is the capital of France?"}, + ], + } + response = requests.post(url, json=payload) + assert response.status_code == 400 + + payload = { + "model": model_uid_res, + "messages": [ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Hello!"}, + {"role": "assistant", "content": "Hi what can I help you?"}, + ], + } + response = requests.post(url, json=payload) + assert response.status_code == 400 + # delete url = f"{endpoint}/v1/models/test" response = requests.delete(url) @@ -83,4 +140,4 @@ async def test_restful_api(setup): # delete again url = f"{endpoint}/v1/models/test" response = requests.delete(url) - assert response.status_code != 200 + assert response.status_code == 400 diff --git a/xinference/tests/test_client.py b/xinference/tests/test_client.py index 23d02c4ed6..0fc94d5d1f 100644 --- a/xinference/tests/test_client.py +++ b/xinference/tests/test_client.py @@ -50,17 +50,41 @@ async def test_RESTful_client(setup): model = client.get_model(model_uid=model_uid) + with pytest.raises(RuntimeError): + model = client.get_model(model_uid="test") + + with pytest.raises(RuntimeError): + completion = model.generate({"max_tokens": 64}) + completion = model.generate("Once upon a time, there was a very old computer") assert "text" in completion["choices"][0] completion = model.generate( - "Once upon a time, there was a very old computer", {"max_tokens": 256} + "Once upon a time, there was a very old computer", {"max_tokens": 64} ) assert "text" in completion["choices"][0] - completion = model.chat("write a poem.") + streaming_response = model.generate( + "Once upon a time, there was a very old computer", + {"max_tokens": 64, "stream": True}, + ) + + for chunk in streaming_response: + assert "text" in chunk["choices"][0] + + with pytest.raises(RuntimeError): + completion = model.chat({"max_tokens": 64}) + + completion = model.chat("What is the capital of France?") assert "content" in completion["choices"][0]["message"] + streaming_response = model.chat( + prompt="What is the capital of France?", generate_config={"stream": True} + ) + + for chunk in streaming_response: + assert "content" or "role" in chunk["choices"][0]["delta"] + client.terminate_model(model_uid=model_uid) assert len(client.list_models()) == 0
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
sympy__sympy-27020@42493e3
sympy/sympy
Python
27,020
fail when no subset solves poly system; better removal of redundant solutions
Until the Groebner approach is used to solve systems of polynomials, it might be advantageous to try to solve the equations with the traditional methods (instead of a system-of-equations approach). The added test fails to solve as a system, but a solution can be found.

#### References to other Issues or PRs
closes #27001

#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* solvers
    * better removal of redundant solutions from systems of equations
    * positive-dimensional systems of polynomials which raised an error may now report a solution
<!-- END RELEASE NOTES -->
2024-08-30T03:15:15Z
Possible bug with sympy.solve For some systems of equations, sympy.solve can succeed when solving with respect to a subset of variables but raises an error when asked to solve with respect to a large set containing that variable set (containing that subset). This seems like a bug, as it should at least be able to find the same solutions it could find with the smaller variable set. Here is a minimal example. The following code can be executed without an error: ``` from sympy import symbols, solve a1,a2,a3=symbols(['a1','a2','a3']) eqnSystem=[a1, a1**2] solve(eqnSystem,[a1,a2],dict=True) ``` But an error will be raised if the following line is then executed: ``` solve(eqnSystem,[a1,a2,a3],dict=True) ``` The error raise is *no valid subset found*. If solve can solve the system with respect to `[a1,a2]`, shouldn't it also be able to find some solution with respect to `[a1,a2,a3]`?
This comes from SO: https://stackoverflow.com/questions/78913691/why-might-sympy-solve-succeed-with-respect-to-fewer-variables-but-fail-w-r-t-mo This is the error: ```python In [2]: solve([x, x**2], [x, y, z]) --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[2], line 1 ----> 1 solve([x, x**2], [x, y, z]) File ~/current/active/sympy/sympy/solvers/solvers.py:1172, in solve(f, *symbols, **flags) 1170 solution = _solve(f[0], *symbols, **flags) 1171 else: -> 1172 linear, solution = _solve_system(f, symbols, **flags) 1173 assert type(solution) is list 1174 assert not solution or type(solution[0]) is dict, solution File ~/current/active/sympy/sympy/solvers/solvers.py:1896, in _solve_system(exprs, symbols, **flags) 1894 solved_syms = list(got_s) 1895 else: -> 1896 raise NotImplementedError('no valid subset found') 1897 else: 1898 try: NotImplementedError: no valid subset found ``` The problem comes from the overly complicated looking code here: https://github.com/sympy/sympy/blob/9ae79a9b5802d2697e0644895e91734c33be5a1b/sympy/solvers/solvers.py#L1869-L1896 I don't know what that is doing but it should be rewritten some other way. Calling `solve_poly_system` in a loop of combinatoric size should not happen... The following still fails: subsets should additionally not exceed the length of equations. ``` from sympy import symbols, solve s=a1,a2,a3,a4,a5=symbols(['a1','a2','a3','a4','a5']) eqnSystem3=[8*a1**4*a2 + 4*a1**2*a2**3 - 8*a1**2*a2*a4 + a2**5/2 - 2*a2**3*a4 + 8*a2*a3**2 + 2*a2*a4**2 + 8*a2*a5, 12*a1**4 + 6*a1**2*a2**2 - 8*a1**2*a4 + 3*a2**4/4 - 2*a2**2*a4 + 4*a3**2 + a4**2 + 4*a5, 16*a1**3 + 4*a1*a2**2 - 8*a1*a4, -8*a1**2*a2 - 2*a2**3 + 4*a2*a4] solve(eqnSystem3,s,dict=True) ```
[ { "body": "For some systems of equations, sympy.solve can succeed when solving with respect to a subset of variables but raises an error when asked to solve with respect to a large set containing that variable set (containing that subset). This seems like a bug, as it should at least be able to find the same solutions it could find with the smaller variable set.\r\n\r\nHere is a minimal example.\r\nThe following code can be executed without an error:\r\n```\r\nfrom sympy import symbols, solve\r\na1,a2,a3=symbols(['a1','a2','a3'])\r\neqnSystem=[a1, a1**2]\r\nsolve(eqnSystem,[a1,a2],dict=True)\r\n```\r\n\r\nBut an error will be raised if the following line is then executed:\r\n```\r\nsolve(eqnSystem,[a1,a2,a3],dict=True)\r\n```\r\nThe error raise is *no valid subset found*.\r\n\r\nIf solve can solve the system with respect to `[a1,a2]`, shouldn't it also be able to find some solution with respect to `[a1,a2,a3]`?", "number": 27001, "title": "Possible bug with sympy.solve" } ]
95a8175eb35ca7c8d84afadafb8df4bfb93fd1d2
{ "head_commit": "42493e3fa9eec2ed12a797e74ce7a8f091c75532", "head_commit_message": "update existing soln\n\nWhen testing to see if a solution is a superset of another, update the other with the values of the solution being tested, first.", "patch_to_review": "diff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py\nindex 869c8aa7fdc0..b1210a2baaf6 100644\n--- a/sympy/solvers/solvers.py\n+++ b/sympy/solvers/solvers.py\n@@ -1893,7 +1893,7 @@ def _solve_system(exprs, symbols, **flags):\n if got_s:\n solved_syms = list(got_s)\n else:\n- raise NotImplementedError('no valid subset found')\n+ failed.extend([g.as_expr() for g in polys])\n else:\n try:\n result = solve_poly_system(polys, *symbols)\n@@ -1985,10 +1985,13 @@ def key(sym):\n # check that it is independent of previous solutions\n iset = set(rnew.items())\n for i in newresult:\n- if len(i) < len(iset) and not set(i.items()) - iset:\n- # this is a superset of a known solution that\n- # is smaller\n- break\n+ if len(i) < len(iset):\n+ # update i with what is known\n+ i_items_updated = {(k, v.xreplace(rnew)) for k, v in i.items()}\n+ if not i_items_updated - iset:\n+ # this is a superset of a known solution that\n+ # is smaller\n+ break\n else:\n # keep it\n newresult.append(rnew)\ndiff --git a/sympy/solvers/tests/test_solvers.py b/sympy/solvers/tests/test_solvers.py\nindex 6f8275ee3197..b93f6e9f8e78 100644\n--- a/sympy/solvers/tests/test_solvers.py\n+++ b/sympy/solvers/tests/test_solvers.py\n@@ -2571,6 +2571,14 @@ def test_issue_20747():\n \n def test_issue_27001():\n assert solve((x, x**2), (x, y, z), dict=True) == [{x: 0}]\n+ s = a1, a2, a3, a4, a5 = symbols('a1:6')\n+ eqs = [8*a1**4*a2 + 4*a1**2*a2**3 - 8*a1**2*a2*a4 + a2**5/2 - 2*a2**3*a4 +\n+ 8*a2*a3**2 + 2*a2*a4**2 + 8*a2*a5, 12*a1**4 + 6*a1**2*a2**2 -\n+ 8*a1**2*a4 + 3*a2**4/4 - 2*a2**2*a4 + 4*a3**2 + a4**2 + 4*a5, 16*a1**3\n+ + 4*a1*a2**2 - 8*a1*a4, -8*a1**2*a2 - 2*a2**3 + 4*a2*a4]\n+ assert solve(eqs, s, dict=True) == [\n+ {a4: 2*a1**2 + a2**2/2, a5: -a3**2}, {a1: 0, a2: 0, a5: -a3**2 - a4**2/4},\n+ {a1: 0, a4: a2**2/2, a5: -a3**2}]\n \n \n def test_issue_20902():\n" }
[ { "diff_hunk": "@@ -2571,6 +2571,14 @@ def test_issue_20747():\n \n def test_issue_27001():\n assert solve((x, x**2), (x, y, z), dict=True) == [{x: 0}]\n+ s = a1, a2, a3, a4, a5 = symbols('a1:6')\n+ eqs = [8*a1**4*a2 + 4*a1**2*a2**3 - 8*a1**2*a2*a4 + a2**5/2 - 2*a2**3*a4 +\n+ 8*a2*a3**2 + 2*a2*a4**2 + 8*a2*a5, 12*a1**4 + 6*a1**2*a2**2 -\n+ 8*a1**2*a4 + 3*a2**4/4 - 2*a2**2*a4 + 4*a3**2 + a4**2 + 4*a5, 16*a1**3\n+ + 4*a1*a2**2 - 8*a1*a4, -8*a1**2*a2 - 2*a2**3 + 4*a2*a4]\n+ assert solve(eqs, s, dict=True) == [\n+ {a4: 2*a1**2 + a2**2/2, a5: -a3**2}, {a1: 0, a2: 0, a5: -a3**2 - a4**2/4},\n+ {a1: 0, a4: a2**2/2, a5: -a3**2}]", "line": null, "original_line": 2581, "original_start_line": 2580, "path": "sympy/solvers/tests/test_solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n {a4: 2*a1**2 + a2**2/2, a5: -a3**2}, {a1: 0, a2: 0, a5: -a3**2 - a4**2/4}]\r\n```" } ]
c3c1ace3db4cf0400a58dc797c274bb4542b9715
diff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py index 869c8aa7fdc0..b1210a2baaf6 100644 --- a/sympy/solvers/solvers.py +++ b/sympy/solvers/solvers.py @@ -1893,7 +1893,7 @@ def _solve_system(exprs, symbols, **flags): if got_s: solved_syms = list(got_s) else: - raise NotImplementedError('no valid subset found') + failed.extend([g.as_expr() for g in polys]) else: try: result = solve_poly_system(polys, *symbols) @@ -1985,10 +1985,13 @@ def key(sym): # check that it is independent of previous solutions iset = set(rnew.items()) for i in newresult: - if len(i) < len(iset) and not set(i.items()) - iset: - # this is a superset of a known solution that - # is smaller - break + if len(i) < len(iset): + # update i with what is known + i_items_updated = {(k, v.xreplace(rnew)) for k, v in i.items()} + if not i_items_updated - iset: + # this is a superset of a known solution that + # is smaller + break else: # keep it newresult.append(rnew) diff --git a/sympy/solvers/tests/test_solvers.py b/sympy/solvers/tests/test_solvers.py index 6f8275ee3197..1815ac953ce1 100644 --- a/sympy/solvers/tests/test_solvers.py +++ b/sympy/solvers/tests/test_solvers.py @@ -23,7 +23,7 @@ from sympy.logic.boolalg import (And, Or) from sympy.matrices.dense import Matrix from sympy.matrices import SparseMatrix -from sympy.polys.polytools import Poly +from sympy.polys.polytools import Poly, groebner from sympy.printing.str import sstr from sympy.simplify.radsimp import denom from sympy.solvers.solvers import (nsolve, solve, solve_linear) @@ -2571,6 +2571,14 @@ def test_issue_20747(): def test_issue_27001(): assert solve((x, x**2), (x, y, z), dict=True) == [{x: 0}] + s = a1, a2, a3, a4, a5 = symbols('a1:6') + eqs = [8*a1**4*a2 + 4*a1**2*a2**3 - 8*a1**2*a2*a4 + a2**5/2 - 2*a2**3*a4 + + 8*a2*a3**2 + 2*a2*a4**2 + 8*a2*a5, 12*a1**4 + 6*a1**2*a2**2 - + 8*a1**2*a4 + 3*a2**4/4 - 2*a2**2*a4 + 4*a3**2 + a4**2 + 4*a5, 16*a1**3 + + 4*a1*a2**2 - 8*a1*a4, -8*a1**2*a2 - 2*a2**3 + 4*a2*a4] + sol = [{a4: 2*a1**2 + a2**2/2, a5: -a3**2}, {a1: 0, a2: 0, a5: -a3**2 - a4**2/4}] + assert solve(eqs, s, dict=True) == sol + assert (g:=solve(groebner(eqs, s), dict=True)) == sol, g def test_issue_20902():
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-27013@9a3a4c4
sympy/sympy
Python
27,013
Fixed bug related to limits of abs
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #26513 #### Brief description of what is fixed or changed sign function in gruntz gave erroneous results for negative logarithmic arguments. Refactored `set_signs` in limits to let pass expressions returning NotImplementedError for gruntz to operate directly on abs functions . Made this separate PR for this issue as suggested in #26749. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * series * Fixed `sign` in gruntz for better handling of terms having negative bases in power expressions with symbolic exponents. <!-- END RELEASE NOTES -->
2024-08-28T17:33:25Z
Wrong limit result for Abs((-n/(n+1))**n) ```python import sympy from sympy import S, Symbol, limit, oo print('sympy version:', sympy.__version__) n = Symbol('n', real=True) expr = (n/(n+1))**n print(limit(expr, n, oo)) # Success expr = (-n/(n+1))**n print(limit(expr, n, oo)) # Wrong expr = Abs((-n/(n+1))**n) print(limit(expr, n, oo)) # Wrong ``` The result: ```python sympy version: 1.12 exp(-1) 0 0 ```
Thanks for raising this. We do perform poorly when it comes to limits (bi-directional, or at infinities too ... you can easily find similar errors for `-oo` too ) involving the abs function. It seems that the result for the third limit (the one with abs) is being returned correctly by gruntz but not for the second one ``` In [6]: gruntz((-n/(n+1))**n, n, oo) Out[6]: 0 In [7]: gruntz(abs((-n/(n+1))**n), n, oo) Out[7]: exp(-1) ``` though for the second one(negative one) we get value 0 though the expression (-n/(n+1))**n is not well defined at infinity, it seems the problem stems from the following lines: https://github.com/sympy/sympy/blob/2565eb35e255f0fee2267f20e9065b0f5eedf562/sympy/series/limits.py#L266-L286 since gruntz returns wrong value for (-n/(n+1))**n i.e 0 the computation for Abs one also gets messed up. Ok these are my thoughts here 1) So technically the gruntz is responsible for all 3 outputs here, which means instead of limit we can straight up call gruntz and if we get it to work, we are done. 2) Talking about the 3rd limit So the code flow tries to solve `limit(Abs((-n/(n+1))**n), n, oo)` through 2 approaches i) calcualte `limit((-n/(n+1))**n), n, oo)` and then make a decision ii) calculate `gruntz(Abs((-n/(n+1))**n), n, oo)` Now as we can see, we don't solve `limit((-n/(n+1))**n), n, oo)` correctly and return `0` for the same, hence the case adding Abs goes wrong too 3) So now we need to solve `limit((-n/(n+1))**n), n, oo)` which further depends on `gruntz((-n/(n+1))**n), n, oo)` which currently returns `0`. 4) Hence `gruntz((-n/(n+1))**n), n, oo)` is the whole issue here. Once that is solved we can backtrack and all other limits would work by themselves. Now we return 0 here but the expression is not well defined at infinity. I think we just need to return self here. ``` >>> limit((-n/(n+1))**n), n, oo) Limit((-x/(x + 1))**x, x, oo) ``` >Hence gruntz((-n/(n+1))**n), n, oo) is the whole issue here. Once that is solved we can backtrack and all other limits would work by themselves. Now we return 0 here but the expression is not well defined at infinity. I think we just need to return self here. Yes exactly! It seems that the following lines are the cause of the problem https://github.com/sympy/sympy/blob/3377fbd8311512529c6d014fd60055d8aab5274b/sympy/series/gruntz.py#L417-L418 the function sign(e,x) computes the sign of e as x tends to infinity. In master sign(log(-n/(n+1), n) computes to -1 which is wrong and should have been I. It seems the above lines were added with the context that taylor series expansion of `log(x+1)` around 0 is ` x -x**2/2+ ...' but since here we are trying to find the sign of expression as x tends to oo so maybe this conditional isn't the right thing to do here. Removing the conditional altogether solves the issue as sign of log expressions are calculated correctly using coefficient obtained from mrv_leadterm, here https://github.com/sympy/sympy/blob/3377fbd8311512529c6d014fd60055d8aab5274b/sympy/series/gruntz.py#L421-L422 > the function sign(e,x) computes the sign of e as x tends to infinity. > In master sign(log(-n/(n+1), n) computes to -1 which is wrong and should have been I. Is this `sign` function supposed to be able to handle non-real inputs? It gets this wrong: ```python In [12]: from sympy.series.gruntz import sign In [13]: sign(exp(I*x), x) Out[13]: 1 ``` Is it that somewhere higher up such non-real expressions should be rejected? 
Yeah, since scenarios which will require computation of `sign(exp(I*x), x)` will be simply arise from things like `limit(exp(I*x), x,oo)`and in `limitinf` we use the `sign()` on the coefficient(using mrv_leadterm) of rewritten(rewrite in mrv_leadterm) term where it will raise error, https://github.com/sympy/sympy/blob/da2d56142230563bcfe50c564980e7d07fcc0cc9/sympy/series/gruntz.py#L646-L648 even in case of nested exponentiation or power tower scenarios, routines in `mrv` ensure that we dont end up getting exp or logs in exponents. Nevertheless we will never come to the point of having to use this sign function on exponents of any symbolic expression directly I believe.
[ { "body": "```python\r\nimport sympy\r\n\r\nfrom sympy import S, Symbol, limit, oo\r\n\r\nprint('sympy version:', sympy.__version__)\r\n\r\nn = Symbol('n', real=True)\r\n\r\nexpr = (n/(n+1))**n\r\nprint(limit(expr, n, oo)) # Success\r\n\r\nexpr = (-n/(n+1))**n\r\nprint(limit(expr, n, oo)) # Wrong\r\n\r\nexpr = Abs((-n/(n+1))**n)\r\nprint(limit(expr, n, oo)) # Wrong\r\n```\r\n\r\nThe result:\r\n```python\r\nsympy version: 1.12\r\nexp(-1)\r\n0\r\n0\r\n```", "number": 26513, "title": "Wrong limit result for Abs((-n/(n+1))**n)" } ]
01a25577c8acda8f50a7d35eb0971fc0a1e17344
{ "head_commit": "9a3a4c4cef825970af045829e2791bb09d705eec", "head_commit_message": "added and modified tests\n\nSigned-off-by: arnabnandikgp <[email protected]>", "patch_to_review": "diff --git a/sympy/series/gruntz.py b/sympy/series/gruntz.py\nindex 0bf3bf3e50b1..cad37a988757 100644\n--- a/sympy/series/gruntz.py\n+++ b/sympy/series/gruntz.py\n@@ -411,7 +411,7 @@ def sign(e, x):\n return 1\n if e.exp.is_Integer:\n return s**e.exp\n- elif isinstance(e, log):\n+ elif isinstance(e, log) and e.args[0].is_positive:\n return sign(e.args[0] - 1, x)\n \n # if all else fails, do it the hard way\ndiff --git a/sympy/series/limits.py b/sympy/series/limits.py\nindex b3976e551227..041996fdb13d 100644\n--- a/sympy/series/limits.py\n+++ b/sympy/series/limits.py\n@@ -273,16 +273,20 @@ def set_signs(expr):\n arg_flag = isinstance(expr, arg)\n sign_flag = isinstance(expr, sign)\n if abs_flag or sign_flag or arg_flag:\n- sig = limit(expr.args[0], z, z0, dir)\n- if sig.is_zero:\n- sig = limit(1/expr.args[0], z, z0, dir)\n- if sig.is_extended_real:\n- if (sig < 0) == True:\n- return (-expr.args[0] if abs_flag else\n- S.NegativeOne if sign_flag else S.Pi)\n- elif (sig > 0) == True:\n- return (expr.args[0] if abs_flag else\n- S.One if sign_flag else S.Zero)\n+ try:\n+ sig = limit(expr.args[0], z, z0, dir)\n+ if sig.is_zero:\n+ sig = limit(1/expr.args[0], z, z0, dir)\n+ except NotImplementedError:\n+ pass\n+ else:\n+ if sig.is_extended_real:\n+ if (sig < 0) == True:\n+ return (-expr.args[0] if abs_flag else\n+ S.NegativeOne if sign_flag else S.Pi)\n+ elif (sig > 0) == True:\n+ return (expr.args[0] if abs_flag else\n+ S.One if sign_flag else S.Zero)\n return expr\n \n if e.has(Float):\ndiff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py\nindex 4f2760f8e82a..a2dce871b1e7 100644\n--- a/sympy/series/tests/test_limits.py\n+++ b/sympy/series/tests/test_limits.py\n@@ -1414,6 +1414,12 @@ def test_issue_26250():\n assert limit(e1/e2, x, 0) == -S(1)/8\n \n \n+def test_issue_26513():\n+ assert limit(abs((-x/(x+1))**x), x ,oo) == exp(-1)\n+ assert limit((x/(x + 1))**x, x, oo) == exp(-1)\n+ raises (NotImplementedError, lambda: limit((-x/(x+1))**x, x, oo))\n+\n+\n def test_issue_26916():\n assert limit(Ei(x)*exp(-x), x, +oo) == 0\n assert limit(Ei(x)*exp(-x), x, -oo) == 0\n" }
[ { "diff_hunk": "@@ -273,16 +273,20 @@ def set_signs(expr):\n arg_flag = isinstance(expr, arg)\n sign_flag = isinstance(expr, sign)\n if abs_flag or sign_flag or arg_flag:\n- sig = limit(expr.args[0], z, z0, dir)\n- if sig.is_zero:\n- sig = limit(1/expr.args[0], z, z0, dir)\n- if sig.is_extended_real:\n- if (sig < 0) == True:\n- return (-expr.args[0] if abs_flag else\n- S.NegativeOne if sign_flag else S.Pi)\n- elif (sig > 0) == True:\n- return (expr.args[0] if abs_flag else\n- S.One if sign_flag else S.Zero)\n+ try:\n+ sig = limit(expr.args[0], z, z0, dir)\n+ if sig.is_zero:\n+ sig = limit(1/expr.args[0], z, z0, dir)\n+ except NotImplementedError:\n+ pass", "line": null, "original_line": 281, "original_start_line": null, "path": "sympy/series/limits.py", "start_line": null, "text": "@user1:\nJust a doubt. Are we sure we want to use pass here ?\r\nDo we have any other options and like what result do we get if we don't have a try except block here ?" } ]
3ef9001c15fbb2b61a24f8bbc3915b136f33962c
diff --git a/sympy/series/gruntz.py b/sympy/series/gruntz.py index 0bf3bf3e50b1..cad37a988757 100644 --- a/sympy/series/gruntz.py +++ b/sympy/series/gruntz.py @@ -411,7 +411,7 @@ def sign(e, x): return 1 if e.exp.is_Integer: return s**e.exp - elif isinstance(e, log): + elif isinstance(e, log) and e.args[0].is_positive: return sign(e.args[0] - 1, x) # if all else fails, do it the hard way diff --git a/sympy/series/limits.py b/sympy/series/limits.py index b3976e551227..8abf7fe4a212 100644 --- a/sympy/series/limits.py +++ b/sympy/series/limits.py @@ -273,16 +273,20 @@ def set_signs(expr): arg_flag = isinstance(expr, arg) sign_flag = isinstance(expr, sign) if abs_flag or sign_flag or arg_flag: - sig = limit(expr.args[0], z, z0, dir) - if sig.is_zero: - sig = limit(1/expr.args[0], z, z0, dir) - if sig.is_extended_real: - if (sig < 0) == True: - return (-expr.args[0] if abs_flag else - S.NegativeOne if sign_flag else S.Pi) - elif (sig > 0) == True: - return (expr.args[0] if abs_flag else - S.One if sign_flag else S.Zero) + try: + sig = limit(expr.args[0], z, z0, dir) + if sig.is_zero: + sig = limit(1/expr.args[0], z, z0, dir) + except NotImplementedError: + return expr + else: + if sig.is_extended_real: + if (sig < 0) == True: + return (-expr.args[0] if abs_flag else + S.NegativeOne if sign_flag else S.Pi) + elif (sig > 0) == True: + return (expr.args[0] if abs_flag else + S.One if sign_flag else S.Zero) return expr if e.has(Float): diff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py index 4f2760f8e82a..a2dce871b1e7 100644 --- a/sympy/series/tests/test_limits.py +++ b/sympy/series/tests/test_limits.py @@ -1414,6 +1414,12 @@ def test_issue_26250(): assert limit(e1/e2, x, 0) == -S(1)/8 +def test_issue_26513(): + assert limit(abs((-x/(x+1))**x), x ,oo) == exp(-1) + assert limit((x/(x + 1))**x, x, oo) == exp(-1) + raises (NotImplementedError, lambda: limit((-x/(x+1))**x, x, oo)) + + def test_issue_26916(): assert limit(Ei(x)*exp(-x), x, +oo) == 0 assert limit(Ei(x)*exp(-x), x, -oo) == 0
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26954@3dca453
sympy/sympy
Python
26,954
Support Ei(z) expansion at -oo
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #26916 Closes #26937 #### Brief description of what is fixed or changed The series expansion of `Ei(x)` at `oo` and `-oo` is the same hence we just had to tweak the condition #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * series * Support Ei(z) expansion at -oo <!-- END RELEASE NOTES -->
2024-08-13T05:31:50Z
Ei(-x) * exp(x) (exponential integral) limit to infinity error out The following limit computes fine: ``` (Ei(x) * exp(-x)).limit(x, oo) # Result: 0 ``` But the following doesn't: ``` (Ei(-x) * exp(x)).limit(x, oo) # Or equivalent: (Ei(x) * exp(-x)).limit(x, -oo) # Expected: 0 ``` Instead, sympy will error out with ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\gruntz.py:557, in mrv_leadterm(e, x) 556 try: --> 557 lt = f.leadterm(w, logx=logw) 558 except (NotImplementedError, PoleError, ValueError): File ~\AppData\Roaming\Python\Python39\site-packages\sympy\core\expr.py:3533, in Expr.leadterm(self, x, logx, cdir) 3532 if x in c.free_symbols: -> 3533 raise ValueError(filldedent(""" 3534 cannot compute leadterm(%s, %s). The coefficient 3535 should have been free of %s but got %s""" % (self, x, x, c))) 3536 c = c.subs(d, log(x)) ValueError: cannot compute leadterm(_eis(-1/_w), _w). The coefficient should have been free of _w but got _eis(-1/_w) During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) Cell In[84], line 2 1 (Ei(x) * exp(-x)).limit(x, oo) ----> 2 (Ei(x) * exp(-x)).limit(x, -oo) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\core\expr.py:3418, in Expr.limit(self, x, xlim, dir) 3415 """ Compute limit x->xlim. 3416 """ 3417 from sympy.series.limits import limit -> 3418 return limit(self, x, xlim, dir) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\limits.py:64, in limit(e, z, z0, dir) 13 def limit(e, z, z0, dir="+"): 14 """Computes the limit of ``e(z)`` at the point ``z0``. 15 16 Parameters (...) 61 limit_seq : returns the limit of a sequence. 62 """ ---> 64 return Limit(e, z, z0, dir).doit(deep=False) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\limits.py:375, in Limit.doit(self, **hints) 372 l = None 374 try: --> 375 r = gruntz(e, z, z0, dir) 376 if r is S.NaN or l is S.NaN: 377 raise PoleError() File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\gruntz.py:733, in gruntz(e, z, z0, dir) 730 else: 731 raise NotImplementedError("dir must be '+' or '-'") --> 733 r = limitinf(e0, z) 735 # This is a bit of a heuristic for nice results... we always rewrite 736 # tractable functions in terms of familiar intractable ones. 737 # It might be nicer to rewrite the exactly to what they were initially, 738 # but that would take some work to implement. 
739 return r.rewrite('intractable', deep=True) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\core\cache.py:72, in __cacheit.<locals>.func_wrapper.<locals>.wrapper(*args, **kwargs) 69 @wraps(func) 70 def wrapper(*args, **kwargs): 71 try: ---> 72 retval = cfunc(*args, **kwargs) 73 except TypeError as e: 74 if not e.args or not e.args[0].startswith('unhashable type:'): File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\gruntz.py:453, in limitinf(e, x) 451 c0, e0 = mrv_leadterm(e.min, x) 452 else: --> 453 c0, e0 = mrv_leadterm(e, x) 454 sig = sign(e0, x) 455 if sig == 1: File ~\AppData\Roaming\Python\Python39\site-packages\sympy\core\cache.py:72, in __cacheit.<locals>.func_wrapper.<locals>.wrapper(*args, **kwargs) 69 @wraps(func) 70 def wrapper(*args, **kwargs): 71 try: ---> 72 retval = cfunc(*args, **kwargs) 73 except TypeError as e: 74 if not e.args or not e.args[0].startswith('unhashable type:'): File ~\AppData\Roaming\Python\Python39\site-packages\sympy\series\gruntz.py:563, in mrv_leadterm(e, x) 561 incr = S.One 562 while _series.is_Order: --> 563 _series = f._eval_nseries(w, n=n0+incr, logx=logw) 564 incr *= 2 565 series = _series.expand().removeO() File ~\AppData\Roaming\Python\Python39\site-packages\sympy\functions\special\error_functions.py:2782, in _eis._eval_nseries(self, x, n, logx, cdir) 2780 f = self._eval_rewrite_as_intractable(*self.args) 2781 return f._eval_nseries(x, n, logx) -> 2782 return super()._eval_nseries(x, n, logx) File ~\AppData\Roaming\Python\Python39\site-packages\sympy\core\function.py:690, in Function._eval_nseries(self, x, n, logx, cdir) 688 a0 = [t.limit(x, 0) for t in a] 689 if any(t.has(oo, -oo, zoo, nan) for t in a0): --> 690 return self._eval_aseries(n, args0, x, logx) 691 # Careful: the argument goes to oo, but only logarithmically so. We 692 # are supposed to do a power series expansion "around the 693 # logarithmic term". e.g. 694 # f(1+x+log(x)) 695 # -> f(1+logx) + x*f'(1+logx) + O(x**2) 696 # where 'logx' is given in the argument 697 a = [t._eval_nseries(x, n, logx) for t in args] File ~\AppData\Roaming\Python\Python39\site-packages\sympy\functions\special\error_functions.py:2751, in _eis._eval_aseries(self, n, args0, x, logx) 2749 from sympy.series.order import Order 2750 if args0[0] != S.Infinity: -> 2751 return super(_erfs, self)._eval_aseries(n, args0, x, logx) 2753 z = self.args[0] 2754 l = [factorial(k) * (1/z)**(k + 1) for k in range(n)] TypeError: super(type, obj): obj must be an instance or subtype of type ```
The immediate error message results from a bug that is easily fixed: ```diff diff --git a/sympy/functions/special/error_functions.py b/sympy/functions/special/error_functions.py index 09279588b6..ba29c4d7d4 100644 --- a/sympy/functions/special/error_functions.py +++ b/sympy/functions/special/error_functions.py @@ -2748,7 +2748,7 @@ class _eis(Function): def _eval_aseries(self, n, args0, x, logx): from sympy.series.order import Order if args0[0] != S.Infinity: - return super(_erfs, self)._eval_aseries(n, args0, x, logx) + return super()._eval_aseries(n, args0, x, logx) z = self.args[0] l = [factorial(k) * (1/z)**(k + 1) for k in range(n)] ``` In this case the limit does not compute but does not exit with the error: ```python In [1]: (Ei(-x) * exp(x)).limit(x, oo) Out[1]: ⎛ x ⎞ lim ⎝ℯ ⋅Ei(-x)⎠ x─→∞ ``` Great thanks! A little bit unfortunate that it can't compute the limit however. Luckily WolframAlpha didn't seem to have any issues with it. The basic bug is easy to fix. Evaluating the limit would require having an asymptotic expansion for `Ei(x)` at `-oo`: ```python In [5]: Ei(x).series(x, oo) Out[5]: ⎛120 24 6 2 1 ⎛1 ⎞⎞ x ⎜─── + ── + ── + ── + ─ + 1 + O⎜──; x → ∞⎟⎟⋅ℯ ⎜ 5 4 3 2 x ⎜ 6 ⎟⎟ ⎝x x x x ⎝x ⎠⎠ ────────────────────────────────────────────── x In [6]: Ei(x).series(x, -oo) ... PoleError: Asymptotic expansion of Ei around [-oo] is not implemented. ``` There is an asymptotic description on [Wikipedia about Exponential Integrals](https://en.wikipedia.org/wiki/Exponential_integral) which is asymptotic beyond all borders in the complex plain. Would that help? It is possibly just a case of someone needing to implement the formula somewhere.
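As a rough editorial illustration (not part of the original thread): the asymptotic series the discussion asks for is the same one already used at `+oo`, `Ei(x) ~ exp(x)/x * (1 + 1/x + 2/x**2 + ...)`. The snippet below sketches the behaviour one would expect from a SymPy build that includes the `_eval_aseries` change reviewed later in this record; the expected outputs are taken from the tests added there.

```python
# Minimal sketch; assumes a SymPy build containing the -oo extension of
# Ei._eval_aseries reviewed below (outputs mirror the new test cases).
from sympy import Ei, exp, limit, symbols, oo

x = symbols('x')

print(Ei(x).series(x, -oo))
# (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, -oo)))*exp(x)/x
print(limit(Ei(x)*exp(-x), x, oo))   # 0
print(limit(Ei(x)*exp(-x), x, -oo))  # 0 -- the case reported in the issue
```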
[ { "body": "The following limit computes fine:\r\n\r\n```\r\n(Ei(x) * exp(-x)).limit(x, oo)\r\n# Result: 0\r\n```\r\n\r\nBut the following doesn't:\r\n\r\n```\r\n(Ei(-x) * exp(x)).limit(x, oo)\r\n# Or equivalent: (Ei(x) * exp(-x)).limit(x, -oo)\r\n# Expected: 0\r\n```\r\n\r\nInstead, sympy will error out with\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\gruntz.py:557, in mrv_leadterm(e, x)\r\n 556 try:\r\n--> 557 lt = f.leadterm(w, logx=logw)\r\n 558 except (NotImplementedError, PoleError, ValueError):\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\core\\expr.py:3533, in Expr.leadterm(self, x, logx, cdir)\r\n 3532 if x in c.free_symbols:\r\n-> 3533 raise ValueError(filldedent(\"\"\"\r\n 3534 cannot compute leadterm(%s, %s). The coefficient\r\n 3535 should have been free of %s but got %s\"\"\" % (self, x, x, c)))\r\n 3536 c = c.subs(d, log(x))\r\n\r\nValueError: \r\ncannot compute leadterm(_eis(-1/_w), _w). The coefficient should have\r\nbeen free of _w but got _eis(-1/_w)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTypeError Traceback (most recent call last)\r\nCell In[84], line 2\r\n 1 (Ei(x) * exp(-x)).limit(x, oo)\r\n----> 2 (Ei(x) * exp(-x)).limit(x, -oo)\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\core\\expr.py:3418, in Expr.limit(self, x, xlim, dir)\r\n 3415 \"\"\" Compute limit x->xlim.\r\n 3416 \"\"\"\r\n 3417 from sympy.series.limits import limit\r\n-> 3418 return limit(self, x, xlim, dir)\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\limits.py:64, in limit(e, z, z0, dir)\r\n 13 def limit(e, z, z0, dir=\"+\"):\r\n 14 \"\"\"Computes the limit of ``e(z)`` at the point ``z0``.\r\n 15 \r\n 16 Parameters\r\n (...)\r\n 61 limit_seq : returns the limit of a sequence.\r\n 62 \"\"\"\r\n---> 64 return Limit(e, z, z0, dir).doit(deep=False)\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\limits.py:375, in Limit.doit(self, **hints)\r\n 372 l = None\r\n 374 try:\r\n--> 375 r = gruntz(e, z, z0, dir)\r\n 376 if r is S.NaN or l is S.NaN:\r\n 377 raise PoleError()\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\gruntz.py:733, in gruntz(e, z, z0, dir)\r\n 730 else:\r\n 731 raise NotImplementedError(\"dir must be '+' or '-'\")\r\n--> 733 r = limitinf(e0, z)\r\n 735 # This is a bit of a heuristic for nice results... 
we always rewrite\r\n 736 # tractable functions in terms of familiar intractable ones.\r\n 737 # It might be nicer to rewrite the exactly to what they were initially,\r\n 738 # but that would take some work to implement.\r\n 739 return r.rewrite('intractable', deep=True)\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\core\\cache.py:72, in __cacheit.<locals>.func_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 69 @wraps(func)\r\n 70 def wrapper(*args, **kwargs):\r\n 71 try:\r\n---> 72 retval = cfunc(*args, **kwargs)\r\n 73 except TypeError as e:\r\n 74 if not e.args or not e.args[0].startswith('unhashable type:'):\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\gruntz.py:453, in limitinf(e, x)\r\n 451 c0, e0 = mrv_leadterm(e.min, x)\r\n 452 else:\r\n--> 453 c0, e0 = mrv_leadterm(e, x)\r\n 454 sig = sign(e0, x)\r\n 455 if sig == 1:\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\core\\cache.py:72, in __cacheit.<locals>.func_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 69 @wraps(func)\r\n 70 def wrapper(*args, **kwargs):\r\n 71 try:\r\n---> 72 retval = cfunc(*args, **kwargs)\r\n 73 except TypeError as e:\r\n 74 if not e.args or not e.args[0].startswith('unhashable type:'):\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\series\\gruntz.py:563, in mrv_leadterm(e, x)\r\n 561 incr = S.One\r\n 562 while _series.is_Order:\r\n--> 563 _series = f._eval_nseries(w, n=n0+incr, logx=logw)\r\n 564 incr *= 2\r\n 565 series = _series.expand().removeO()\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\functions\\special\\error_functions.py:2782, in _eis._eval_nseries(self, x, n, logx, cdir)\r\n 2780 f = self._eval_rewrite_as_intractable(*self.args)\r\n 2781 return f._eval_nseries(x, n, logx)\r\n-> 2782 return super()._eval_nseries(x, n, logx)\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\core\\function.py:690, in Function._eval_nseries(self, x, n, logx, cdir)\r\n 688 a0 = [t.limit(x, 0) for t in a]\r\n 689 if any(t.has(oo, -oo, zoo, nan) for t in a0):\r\n--> 690 return self._eval_aseries(n, args0, x, logx)\r\n 691 # Careful: the argument goes to oo, but only logarithmically so. We\r\n 692 # are supposed to do a power series expansion \"around the\r\n 693 # logarithmic term\". e.g.\r\n 694 # f(1+x+log(x))\r\n 695 # -> f(1+logx) + x*f'(1+logx) + O(x**2)\r\n 696 # where 'logx' is given in the argument\r\n 697 a = [t._eval_nseries(x, n, logx) for t in args]\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\functions\\special\\error_functions.py:2751, in _eis._eval_aseries(self, n, args0, x, logx)\r\n 2749 from sympy.series.order import Order\r\n 2750 if args0[0] != S.Infinity:\r\n-> 2751 return super(_erfs, self)._eval_aseries(n, args0, x, logx)\r\n 2753 z = self.args[0]\r\n 2754 l = [factorial(k) * (1/z)**(k + 1) for k in range(n)]\r\n\r\nTypeError: super(type, obj): obj must be an instance or subtype of type\r\n```", "number": 26916, "title": "Ei(-x) * exp(x) (exponential integral) limit to infinity error out" } ]
823065c6d82ef2ceb9c0c78ef19ae94685dbdfef
{ "head_commit": "3dca4530178450dd519daf9b57f5f08eb673f084", "head_commit_message": "Support Ei(z) expansion at -oo", "patch_to_review": "diff --git a/sympy/functions/special/error_functions.py b/sympy/functions/special/error_functions.py\nindex fd616c49af6f..945bad4c4384 100644\n--- a/sympy/functions/special/error_functions.py\n+++ b/sympy/functions/special/error_functions.py\n@@ -1263,7 +1263,7 @@ def _eval_aseries(self, n, args0, x, logx):\n from sympy.series.order import Order\n point = args0[0]\n \n- if point is S.Infinity:\n+ if point in (S.Infinity, S.NegativeInfinity):\n z = self.args[0]\n s = [factorial(k) / (z)**k for k in range(n)] + \\\n [Order(1/z**n, x)]\n@@ -2766,8 +2766,8 @@ class _eis(Function):\n \n def _eval_aseries(self, n, args0, x, logx):\n from sympy.series.order import Order\n- if args0[0] != S.Infinity:\n- return super(_erfs, self)._eval_aseries(n, args0, x, logx)\n+ if args0[0] not in (S.Infinity, S.NegativeInfinity):\n+ return super()._eval_aseries(n, args0, x, logx)\n \n z = self.args[0]\n l = [factorial(k) * (1/z)**(k + 1) for k in range(n)]\ndiff --git a/sympy/functions/special/tests/test_error_functions.py b/sympy/functions/special/tests/test_error_functions.py\nindex 051e1af4200b..dc699c330b8c 100644\n--- a/sympy/functions/special/tests/test_error_functions.py\n+++ b/sympy/functions/special/tests/test_error_functions.py\n@@ -437,6 +437,11 @@ def test_ei():\n assert Ei(x).series(x, 1, 3) == Ei(1) + E*(x - 1) + O((x - 1)**3, (x, 1))\n assert Ei(x).series(x, oo) == \\\n (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, oo)))*exp(x)/x\n+ assert Ei(x).series(x, -oo) == \\\n+ (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, -oo)))*exp(x)/x\n+ assert Ei(-x).series(x, oo).expand() == \\\n+ 120*exp(-x)/x**6 - 24*exp(-x)/x**5 + 6*exp(-x)/x**4 - 2*exp(-x)/x**3 + \\\n+ exp(-x)/x**2 - exp(-x)/x + O(exp(-x)/x**7, (x, oo))\n \n assert str(Ei(cos(2)).evalf(n=10)) == '-0.6760647401'\n raises(ArgumentIndexError, lambda: Ei(x).fdiff(2))\ndiff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py\nindex 21777c15e65c..ac28ad3eafbf 100644\n--- a/sympy/series/tests/test_limits.py\n+++ b/sympy/series/tests/test_limits.py\n@@ -1412,3 +1412,8 @@ def test_issue_26250():\n e1 = ((1-3*x**2)*e**2/2 - (x**2-2*x+1)*e*k/2)\n e2 = pi**2*(x**8 - 2*x**7 - x**6 + 4*x**5 - x**4 - 2*x**3 + x**2)\n assert limit(e1/e2, x, 0) == -S(1)/8\n+\n+\n+def test_issue_26916():\n+ assert limit(Ei(x)*exp(-x), x, +oo) == 0\n+ assert limit(Ei(x)*exp(-x), x, -oo) == 0\n" }
[ { "diff_hunk": "@@ -437,6 +437,11 @@ def test_ei():\n assert Ei(x).series(x, 1, 3) == Ei(1) + E*(x - 1) + O((x - 1)**3, (x, 1))\n assert Ei(x).series(x, oo) == \\\n (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, oo)))*exp(x)/x\n+ assert Ei(x).series(x, -oo) == \\\n+ (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, -oo)))*exp(x)/x\n+ assert Ei(-x).series(x, oo).expand() == \\\n+ 120*exp(-x)/x**6 - 24*exp(-x)/x**5 + 6*exp(-x)/x**4 - 2*exp(-x)/x**3 + \\\n+ exp(-x)/x**2 - exp(-x)/x + O(exp(-x)/x**7, (x, oo))", "line": null, "original_line": 444, "original_start_line": 440, "path": "sympy/functions/special/tests/test_error_functions.py", "start_line": null, "text": "@user1:\nWhy is `expand` being called in one case but not the other?\r\n```python\r\nIn [1]: Ei(-x).series(x, oo)\r\nOut[1]: \r\n ⎛ 120 24 6 2 1 ⎛1 ⎞⎞ -x \r\n-⎜- ─── + ── - ── + ── - ─ + 1 + O⎜──; x → ∞⎟⎟⋅ℯ \r\n ⎜ 5 4 3 2 x ⎜ 6 ⎟⎟ \r\n ⎝ x x x x ⎝x ⎠⎠ \r\n───────────────────────────────────────────────────\r\n x \r\n\r\nIn [2]: Ei(-x).series(x, -oo)\r\nOut[2]: \r\n ⎛ 120 24 6 2 1 ⎛1 ⎞⎞ -x \r\n-⎜- ─── + ── - ── + ── - ─ + 1 + O⎜──; x → -∞⎟⎟⋅ℯ \r\n ⎜ 5 4 3 2 x ⎜ 6 ⎟⎟ \r\n ⎝ x x x x ⎝x ⎠⎠ \r\n────────────────────────────────────────────────────\r\n x \r\n\r\nIn [3]: Ei(-x).series(x, -oo).expand()\r\nOut[3]: \r\n -x -x -x -x -x -x ⎛ -x ⎞\r\n120⋅ℯ 24⋅ℯ 6⋅ℯ 2⋅ℯ ℯ ℯ ⎜-ℯ ⎟\r\n─────── - ────── + ───── - ───── + ─── - ─── + O⎜─────; x → -∞⎟\r\n 6 5 4 3 2 x ⎜ 7 ⎟\r\n x x x x x ⎝ x ⎠\r\n```\n\n@author:\nWell cause I wasn't able to figure out why something like fails\r\n```\r\n>>> Ei(-x).series(x, oo)\r\n-(-120/x**5 + 24/x**4 - 6/x**3 + 2/x**2 - 1/x + 1 + O(x**(-6), (x, oo)))*exp(-x)/x\r\n>>> -(-120/x**5 + 24/x**4 - 6/x**3 + 2/x**2 - 1/x + 1 + O(x**(-6), (x, oo)))*exp(-x)/x\r\n(120/x**5 - 24/x**4 + 6/x**3 - 2/x**2 + 1/x - 1 + O(x**(-6), (x, oo)))*exp(-x)/x\r\n>>> assert Ei(-x).series(x, oo) == (120/x**5 - 24/x**4 + 6/x**3 - 2/x**2 + 1/x - 1 + O(x**(-6), (x, oo)))*exp(-x)/x\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAssertionError\r\n```\r\n\r\nAnd hence wasn't sure how could I add a test for this and hence I ended up using expand !\r\n\r\nThe result is as expected. Am I missing something here ?\n\n@user1:\nBrackets are needed:\r\n```python\r\nIn [3]: -(-120/x**5 + 24/x**4 - 6/x**3 + 2/x**2 - 1/x + 1 + O(x**(-6), (x, oo)))*exp(-x)/x\r\nOut[3]: \r\n⎛120 24 6 2 1 ⎛1 ⎞⎞ -x\r\n⎜─── - ── + ── - ── + ─ - 1 + O⎜──; x → ∞⎟⎟⋅ℯ \r\n⎜ 5 4 3 2 x ⎜ 6 ⎟⎟ \r\n⎝x x x x ⎝x ⎠⎠ \r\n───────────────────────────────────────────────\r\n x \r\n\r\nIn [4]: -((-120/x**5 + 24/x**4 - 6/x**3 + 2/x**2 - 1/x + 1 + O(x**(-6), (x, oo)))*exp(-x)/x)\r\nOut[4]: \r\n ⎛ 120 24 6 2 1 ⎛1 ⎞⎞ -x \r\n-⎜- ─── + ── - ── + ── - ─ + 1 + O⎜──; x → ∞⎟⎟⋅ℯ \r\n ⎜ 5 4 3 2 x ⎜ 6 ⎟⎟ \r\n ⎝ x x x x ⎝x ⎠⎠ \r\n───────────────────────────────────────────────────\r\n x \r\n```\r\nThis is because of automatic distribution in a 2-arg Mul (a longstanding issue):\r\n```python\r\nIn [5]: 2*(x + y)*z\r\nOut[5]: z⋅(2⋅x + 2⋅y)\r\n\r\nIn [6]: 2*((x + y)*z)\r\nOut[6]: 2⋅z⋅(x + y)\r\n```\n\n@author:\nThanks for the help !\n\n@user1:\nProbably in this case distribution is the right thing though and the series code should be changed to return the distributed result.\n\n@author:\nArghh I just committed the change with the brackets :|\n\n@user1:\nThat's fine. This PR is good. Just a future PR could maybe handle this. Ideally we want the series to be in a well-defined canonical form so that tests like this are not so fragile." } ]
b973d2f3ab285e7525f9932f430645a67717455a
diff --git a/sympy/functions/special/error_functions.py b/sympy/functions/special/error_functions.py index fd616c49af6f..945bad4c4384 100644 --- a/sympy/functions/special/error_functions.py +++ b/sympy/functions/special/error_functions.py @@ -1263,7 +1263,7 @@ def _eval_aseries(self, n, args0, x, logx): from sympy.series.order import Order point = args0[0] - if point is S.Infinity: + if point in (S.Infinity, S.NegativeInfinity): z = self.args[0] s = [factorial(k) / (z)**k for k in range(n)] + \ [Order(1/z**n, x)] @@ -2766,8 +2766,8 @@ class _eis(Function): def _eval_aseries(self, n, args0, x, logx): from sympy.series.order import Order - if args0[0] != S.Infinity: - return super(_erfs, self)._eval_aseries(n, args0, x, logx) + if args0[0] not in (S.Infinity, S.NegativeInfinity): + return super()._eval_aseries(n, args0, x, logx) z = self.args[0] l = [factorial(k) * (1/z)**(k + 1) for k in range(n)] diff --git a/sympy/functions/special/tests/test_error_functions.py b/sympy/functions/special/tests/test_error_functions.py index 051e1af4200b..b3085e8e92c1 100644 --- a/sympy/functions/special/tests/test_error_functions.py +++ b/sympy/functions/special/tests/test_error_functions.py @@ -437,6 +437,10 @@ def test_ei(): assert Ei(x).series(x, 1, 3) == Ei(1) + E*(x - 1) + O((x - 1)**3, (x, 1)) assert Ei(x).series(x, oo) == \ (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, oo)))*exp(x)/x + assert Ei(x).series(x, -oo) == \ + (120/x**5 + 24/x**4 + 6/x**3 + 2/x**2 + 1/x + 1 + O(x**(-6), (x, -oo)))*exp(x)/x + assert Ei(-x).series(x, oo) == \ + -((-120/x**5 + 24/x**4 - 6/x**3 + 2/x**2 - 1/x + 1 + O(x**(-6), (x, oo)))*exp(-x)/x) assert str(Ei(cos(2)).evalf(n=10)) == '-0.6760647401' raises(ArgumentIndexError, lambda: Ei(x).fdiff(2)) diff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py index 21777c15e65c..ac28ad3eafbf 100644 --- a/sympy/series/tests/test_limits.py +++ b/sympy/series/tests/test_limits.py @@ -1412,3 +1412,8 @@ def test_issue_26250(): e1 = ((1-3*x**2)*e**2/2 - (x**2-2*x+1)*e*k/2) e2 = pi**2*(x**8 - 2*x**7 - x**6 + 4*x**5 - x**4 - 2*x**3 + x**2) assert limit(e1/e2, x, 0) == -S(1)/8 + + +def test_issue_26916(): + assert limit(Ei(x)*exp(-x), x, +oo) == 0 + assert limit(Ei(x)*exp(-x), x, -oo) == 0
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26843@df899c7
sympy/sympy
Python
26843
fix(polys): fix DMP.cancel for GF(p) with Flint
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes gh-26804 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-07-21T12:03:04Z
Poly.cancel fails over GF with ground types flint This is after gh-25940 ```python In [1]: Poly(x, modulus=11).cancel(Poly(x, modulus=11)) --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) Cell In[1], line 1 ----> 1 Poly(x, modulus=11).cancel(Poly(x, modulus=11)) File ~/current/active/sympy/sympy/polys/polytools.py:3890, in Poly.cancel(f, g, include) 3887 dom, per, F, G = f._unify(g) 3889 if hasattr(F, 'cancel'): -> 3890 result = F.cancel(G, include=include) 3891 else: # pragma: no cover 3892 raise OperationNotSupported(f, 'cancel') File ~/current/active/sympy/sympy/polys/polyclasses.py:849, in DMP.cancel(f, g, include) 847 return F._cancel_include(G) 848 else: --> 849 return F._cancel(G) File ~/current/active/sympy/sympy/polys/polyclasses.py:2143, in DUP_Flint._cancel(f, g) 2140 """Cancel common factors in a rational function ``f/g``. """ 2141 # Think carefully about how to handle denominators and coefficient 2142 # canonicalisation if more domains are permitted... -> 2143 assert f.dom == g.dom in (ZZ, QQ) 2145 if f.dom.is_QQ: 2146 cG, F = f.clear_denoms() AssertionError: ```
[ { "body": "This is after gh-25940\r\n```python\r\nIn [1]: Poly(x, modulus=11).cancel(Poly(x, modulus=11))\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\nCell In[1], line 1\r\n----> 1 Poly(x, modulus=11).cancel(Poly(x, modulus=11))\r\n\r\nFile ~/current/active/sympy/sympy/polys/polytools.py:3890, in Poly.cancel(f, g, include)\r\n 3887 dom, per, F, G = f._unify(g)\r\n 3889 if hasattr(F, 'cancel'):\r\n-> 3890 result = F.cancel(G, include=include)\r\n 3891 else: # pragma: no cover\r\n 3892 raise OperationNotSupported(f, 'cancel')\r\n\r\nFile ~/current/active/sympy/sympy/polys/polyclasses.py:849, in DMP.cancel(f, g, include)\r\n 847 return F._cancel_include(G)\r\n 848 else:\r\n--> 849 return F._cancel(G)\r\n\r\nFile ~/current/active/sympy/sympy/polys/polyclasses.py:2143, in DUP_Flint._cancel(f, g)\r\n 2140 \"\"\"Cancel common factors in a rational function ``f/g``. \"\"\"\r\n 2141 # Think carefully about how to handle denominators and coefficient\r\n 2142 # canonicalisation if more domains are permitted...\r\n-> 2143 assert f.dom == g.dom in (ZZ, QQ)\r\n 2145 if f.dom.is_QQ:\r\n 2146 cG, F = f.clear_denoms()\r\n\r\nAssertionError: \r\n```", "number": 26804, "title": "Poly.cancel fails over GF with ground types flint" } ]
958cc95aebc35ced70d870586e68d0468684c1a0
{ "head_commit": "df899c7be13e3b9767736462092ddea8cf2416c8", "head_commit_message": "fix(polys): fix DMP.cancel for GF(p) with Flint", "patch_to_review": "diff --git a/sympy/polys/polyclasses.py b/sympy/polys/polyclasses.py\nindex 55780f086709..eab3b39034a7 100644\n--- a/sympy/polys/polyclasses.py\n+++ b/sympy/polys/polyclasses.py\n@@ -2138,19 +2138,28 @@ def _lcm(f, g):\n \n def _cancel(f, g):\n \"\"\"Cancel common factors in a rational function ``f/g``. \"\"\"\n+ assert f.dom == g.dom\n+ R = f.dom\n+\n # Think carefully about how to handle denominators and coefficient\n # canonicalisation if more domains are permitted...\n- assert f.dom == g.dom in (ZZ, QQ)\n+ assert R.is_ZZ or R.is_QQ or R.is_FiniteField\n \n- if f.dom.is_QQ:\n+ if R.is_FiniteField:\n+ h = f._gcd(g)\n+ F, G = f.exquo(h), g.exquo(h)\n+ return R.one, R.one, F, G\n+\n+ if R.is_QQ:\n cG, F = f.clear_denoms()\n cF, G = g.clear_denoms()\n else:\n- cG, F = f.dom.one, f\n- cF, G = g.dom.one, g\n+ cG, F = R.one, f\n+ cF, G = R.one, g\n \n- cH = cF.gcd(cG)\n- cF, cG = cF // cH, cG // cH\n+ if R.is_ZZ or R.is_QQ:\n+ cH = cF.gcd(cG)\n+ cF, cG = cF // cH, cG // cH\n \n H = F._gcd(G)\n F, G = F.exquo(H), G.exquo(H)\ndiff --git a/sympy/polys/tests/test_polytools.py b/sympy/polys/tests/test_polytools.py\nindex 28ece86006aa..be81aac66a5e 100644\n--- a/sympy/polys/tests/test_polytools.py\n+++ b/sympy/polys/tests/test_polytools.py\n@@ -3405,6 +3405,12 @@ def test_cancel():\n assert cancel((x**2 + 1)/(x - I)) == x + I\n \n \n+def test_cancel_modulus():\n+ assert cancel((x**2 - 1)/(x + 1), modulus=2) == x + 1\n+ assert Poly(x**2 - 1, modulus=2).cancel(Poly(x + 1, modulus=2)) ==\\\n+ (1, Poly(x + 1, modulus=2), Poly(1, x, modulus=2))\n+\n+\n def test_make_monic_over_integers_by_scaling_roots():\n f = Poly(x**2 + 3*x + 4, x, domain='ZZ')\n g, c = f.make_monic_over_integers_by_scaling_roots()\n" }
[ { "diff_hunk": "@@ -2138,19 +2138,28 @@ def _lcm(f, g):\n \n def _cancel(f, g):\n \"\"\"Cancel common factors in a rational function ``f/g``. \"\"\"\n+ assert f.dom == g.dom\n+ R = f.dom\n+\n # Think carefully about how to handle denominators and coefficient\n # canonicalisation if more domains are permitted...\n- assert f.dom == g.dom in (ZZ, QQ)\n+ assert R.is_ZZ or R.is_QQ or R.is_FiniteField\n \n- if f.dom.is_QQ:\n+ if R.is_FiniteField:\n+ h = f._gcd(g)\n+ F, G = f.exquo(h), g.exquo(h)\n+ return R.one, R.one, F, G\n+\n+ if R.is_QQ:\n cG, F = f.clear_denoms()\n cF, G = g.clear_denoms()\n else:\n- cG, F = f.dom.one, f\n- cF, G = g.dom.one, g\n+ cG, F = R.one, f\n+ cF, G = R.one, g\n \n- cH = cF.gcd(cG)\n- cF, cG = cF // cH, cG // cH\n+ if R.is_ZZ or R.is_QQ:", "line": null, "original_line": 2160, "original_start_line": null, "path": "sympy/polys/polyclasses.py", "start_line": null, "text": "@user1:\nIs it necessary to check `is_ZZ or is_QQ` here? `is_FiniteField` is narrowed down from above\n\n@author:\nOh, good point" } ]
71ce1acad71decbda804ed6c121da9dec20ab87a
diff --git a/sympy/polys/domains/finitefield.py b/sympy/polys/domains/finitefield.py index 92ecbaeb52dd..d21e0035a35b 100644 --- a/sympy/polys/domains/finitefield.py +++ b/sympy/polys/domains/finitefield.py @@ -31,51 +31,78 @@ flint = None -def _modular_int_factory(mod, dom, symmetric, self): +def _modular_int_factory_nmod(mod): + # nmod only recognises int + index = operator.index + mod = index(mod) + nmod = flint.nmod + nmod_poly = flint.nmod_poly + + # flint's nmod is only for moduli up to 2^64-1 (on a 64-bit machine) + try: + nmod(0, mod) + except OverflowError: + return None, None + + def ctx(x): + try: + return nmod(x, mod) + except TypeError: + return nmod(index(x), mod) - # Use flint if available - if flint is not None: + def poly_ctx(cs): + return nmod_poly(cs, mod) - nmod = flint.nmod - fmpz_mod_ctx = flint.fmpz_mod_ctx - index = operator.index + return ctx, poly_ctx - try: - mod = dom.convert(mod) - except CoercionFailed: - raise ValueError('modulus must be an integer, got %s' % mod) - # mod might be e.g. Integer +def _modular_int_factory_fmpz_mod(mod): + index = operator.index + fctx = flint.fmpz_mod_ctx(mod) + fctx_poly = flint.fmpz_mod_poly_ctx(mod) + fmpz_mod_poly = flint.fmpz_mod_poly + + def ctx(x): try: - fmpz_mod_ctx(mod) + return fctx(x) except TypeError: - mod = index(mod) + # x might be Integer + return fctx(index(x)) + + def poly_ctx(cs): + return fmpz_mod_poly(cs, fctx_poly) + + return ctx, poly_ctx - # flint's nmod is only for moduli up to 2^64-1 (on a 64-bit machine) - try: - nmod(0, mod) - except OverflowError: - # Use fmpz_mod - fctx = fmpz_mod_ctx(mod) - - def ctx(x): - try: - return fctx(x) - except TypeError: - # x might be Integer - return fctx(index(x)) - else: - # Use nmod - def ctx(x): - try: - return nmod(x, mod) - except TypeError: - return nmod(index(x), mod) - return ctx +def _modular_int_factory(mod, dom, symmetric, self): + # Convert the modulus to ZZ + try: + mod = dom.convert(mod) + except CoercionFailed: + raise ValueError('modulus must be an integer, got %s' % mod) + + ctx, poly_ctx, is_flint = None, None, False + + # Don't use flint if the modulus is not prime as it often crashes. + if flint is not None and mod.is_prime(): + + is_flint = True + + # Try to use flint's nmod first + ctx, poly_ctx = _modular_int_factory_nmod(mod) - # Use the Python implementation - return ModularIntegerFactory(mod, dom, symmetric, self) + if ctx is None: + # Use fmpz_mod for larger moduli + ctx, poly_ctx = _modular_int_factory_fmpz_mod(mod) + + if ctx is None: + # Use the Python implementation if flint is not available or the + # modulus is not prime. 
+ ctx = ModularIntegerFactory(mod, dom, symmetric, self) + poly_ctx = None # not used + + return ctx, poly_ctx, is_flint @public @@ -188,7 +215,12 @@ def __init__(self, mod, symmetric=True): if mod <= 0: raise ValueError('modulus must be a positive integer, got %s' % mod) - self.dtype = _modular_int_factory(mod, dom, symmetric, self) + ctx, poly_ctx, is_flint = _modular_int_factory(mod, dom, symmetric, self) + + self.dtype = ctx + self._poly_ctx = poly_ctx + self._is_flint = is_flint + self.zero = self.dtype(0) self.one = self.dtype(1) self.dom = dom diff --git a/sympy/polys/matrices/_dfm.py b/sympy/polys/matrices/_dfm.py index ef55138b3bcd..c2f6f16922e0 100644 --- a/sympy/polys/matrices/_dfm.py +++ b/sympy/polys/matrices/_dfm.py @@ -151,7 +151,7 @@ def _check(cls, rep, shape, domain): @classmethod def _supports_domain(cls, domain): """Return True if the given domain is supported by DFM.""" - return domain in (ZZ, QQ) or domain.is_FF + return domain in (ZZ, QQ) or domain.is_FF and domain._is_flint @classmethod def _get_flint_func(cls, domain): diff --git a/sympy/polys/polyclasses.py b/sympy/polys/polyclasses.py index 55780f086709..7b677510c85f 100644 --- a/sympy/polys/polyclasses.py +++ b/sympy/polys/polyclasses.py @@ -128,7 +128,7 @@ if GROUND_TYPES == 'flint': import flint def _supported_flint_domain(D): - return D.is_ZZ or D.is_QQ or D.is_FF + return D.is_ZZ or D.is_QQ or D.is_FF and D._is_flint else: flint = None def _supported_flint_domain(D): @@ -1762,10 +1762,7 @@ def _get_flint_poly_cls(cls, dom): elif dom.is_QQ: return flint.fmpq_poly elif dom.is_FF: - if type(dom.one) is flint.nmod: - return lambda rep: flint.nmod_poly(rep, dom.characteristic()) - else: - return lambda rep: flint.fmpz_mod_poly(rep, dom.characteristic()) + return dom._poly_ctx else: raise RuntimeError("Domain %s is not supported with flint" % dom) @@ -1913,7 +1910,7 @@ def _quo_ground(f, c): return f.from_rep(f._rep // c, f.dom) def _exquo_ground(f, c): - """Exact quotient of ``f`` by a an element of the ground domain. """ + """Exact quotient of ``f`` by an element of the ground domain. """ q, r = divmod(f._rep, c) if r: raise ExactQuotientFailed(f, c) @@ -2040,14 +2037,20 @@ def l2_norm_squared(f): def clear_denoms(f): """Clear denominators, but keep the ground domain. """ - denom = f._rep.denom() - numer = f.from_rep(f._cls(f._rep.numer()), f.dom) - return denom, numer + R = f.dom + if R.is_QQ: + denom = f._rep.denom() + numer = f.from_rep(f._cls(f._rep.numer()), f.dom) + return denom, numer + elif R.is_ZZ or R.is_FiniteField: + return R.one, f + else: + raise NotImplementedError def _integrate(f, m=1, j=0): """Computes the ``m``-th order indefinite integral of ``f`` in ``x_j``. """ assert j == 0 - if f.dom.is_QQ: + if f.dom.is_Field: rep = f._rep for i in range(m): rep = rep.integral() @@ -2064,6 +2067,10 @@ def _diff(f, m=1, j=0): return f.from_rep(rep, f.dom) def _eval(f, a): + # XXX: This method is called with many different input types. Ideally + # we could use e.g. fmpz_poly.__call__ here but more thought needs to + # go into which types this is supposed to be called with and what types + # it should return. return f.to_DMP_Python()._eval(a) def _eval_lev(f, a, j): @@ -2082,34 +2089,45 @@ def _gcdex(f, g): def _invert(f, g): """Invert ``f`` modulo ``g``, if possible. """ - if f.dom.is_QQ: + R = f.dom + if R.is_Field: gcd, F_inv, _ = f._rep.xgcd(g._rep) - if gcd != 1: + # XXX: Should be gcd != 1 but nmod_poly does not compare equal to + # other types. 
+ if gcd != 0*gcd + 1: raise NotInvertible("zero divisor") - return f.from_rep(F_inv, f.dom) + return f.from_rep(F_inv, R) else: + # fmpz_poly does not have xgcd or invert and this is not well + # defined in general. return f.to_DMP_Python()._invert(g.to_DMP_Python()).to_DUP_Flint() def _revert(f, n): """Compute ``f**(-1)`` mod ``x**n``. """ + # XXX: Use fmpz_series etc for reversion? + # Maybe python-flint should provide revert for fmpz_poly... return f.to_DMP_Python()._revert(n).to_DUP_Flint() def _subresultants(f, g): """Computes subresultant PRS sequence of ``f`` and ``g``. """ + # XXX: Maybe _fmpz_poly_pseudo_rem_cohen could be used... R = f.to_DMP_Python()._subresultants(g.to_DMP_Python()) return [ g.to_DUP_Flint() for g in R ] def _resultant_includePRS(f, g): """Computes resultant of ``f`` and ``g`` via PRS. """ + # XXX: Maybe _fmpz_poly_pseudo_rem_cohen could be used... res, R = f.to_DMP_Python()._resultant_includePRS(g.to_DMP_Python()) return res, [ g.to_DUP_Flint() for g in R ] def _resultant(f, g): """Computes resultant of ``f`` and ``g``. """ + # XXX: Use fmpz_mpoly etc when possible... return f.to_DMP_Python()._resultant(g.to_DMP_Python()) def discriminant(f): """Computes discriminant of ``f``. """ + # XXX: Use fmpz_mpoly etc when possible... return f.to_DMP_Python().discriminant() def _cofactors(f, g): @@ -2138,16 +2156,24 @@ def _lcm(f, g): def _cancel(f, g): """Cancel common factors in a rational function ``f/g``. """ + assert f.dom == g.dom + R = f.dom + # Think carefully about how to handle denominators and coefficient # canonicalisation if more domains are permitted... - assert f.dom == g.dom in (ZZ, QQ) + assert R.is_ZZ or R.is_QQ or R.is_FiniteField - if f.dom.is_QQ: + if R.is_FiniteField: + h = f._gcd(g) + F, G = f.exquo(h), g.exquo(h) + return R.one, R.one, F, G + + if R.is_QQ: cG, F = f.clear_denoms() cF, G = g.clear_denoms() else: - cG, F = f.dom.one, f - cF, G = g.dom.one, g + cG, F = R.one, f + cF, G = R.one, g cH = cF.gcd(cG) cF, cG = cF // cH, cG // cH @@ -2178,6 +2204,7 @@ def _trunc(f, p): def monic(f): """Divides all coefficients by ``LC(f)``. """ + # XXX: python-flint should add monic return f._exquo_ground(f.LC()) def content(f): @@ -2246,6 +2273,7 @@ def sqf_part(f): def sqf_list(f, all=False): """Returns a list of square-free factors of ``f``. """ + # XXX: python-flint should provide square free factorisation. coeff, factors = f.to_DMP_Python().sqf_list(all=all) return coeff, [ (g.to_DUP_Flint(), k) for g, k in factors ] @@ -2310,6 +2338,9 @@ def _isolate_real_roots_sqf(f, eps, inf, sup, fast): return f.to_DMP_Python()._isolate_real_roots_sqf(eps, inf, sup, fast) def _isolate_all_roots(f, eps, inf, sup, fast): + # fmpz_poly and fmpq_poly have a complex_roots method that could be + # used here. It probably makes more sense to add analogous methods in + # python-flint though. return f.to_DMP_Python()._isolate_all_roots(eps, inf, sup, fast) def _isolate_all_roots_sqf(f, eps, inf, sup, fast): @@ -2354,7 +2385,8 @@ def is_quadratic(f): @property def is_monomial(f): """Returns ``True`` if ``f`` is zero or has only one term. """ - return f.to_DMP_Python().is_monomial + fr = f._rep + return fr.degree() < 0 or not any(fr[n] for n in range(fr.degree())) @property def is_monic(f): @@ -2374,20 +2406,33 @@ def is_homogeneous(f): @property def is_sqf(f): """Returns ``True`` if ``f`` is a square-free polynomial. 
""" - return f.to_DMP_Python().is_sqf + g = f._rep.gcd(f._rep.derivative()) + return g.degree() <= 0 @property def is_irreducible(f): """Returns ``True`` if ``f`` has no factors over its domain. """ - return f.to_DMP_Python().is_irreducible + _, factors = f._rep.factor() + if len(factors) == 0: + return True + elif len(factors) == 1: + return factors[0][1] == 1 + else: + return False @property def is_cyclotomic(f): """Returns ``True`` if ``f`` is a cyclotomic polynomial. """ + if f.dom.is_QQ: + try: + f = f.convert(ZZ) + except CoercionFailed: + return False if f.dom.is_ZZ: return bool(f._rep.is_cyclotomic()) else: - return f.to_DMP_Python().is_cyclotomic + # This is what dup_cyclotomic_p does... + return False def init_normal_DMF(num, den, lev, dom): diff --git a/sympy/polys/tests/test_polytools.py b/sympy/polys/tests/test_polytools.py index 28ece86006aa..1ba0e5a69b8c 100644 --- a/sympy/polys/tests/test_polytools.py +++ b/sympy/polys/tests/test_polytools.py @@ -445,6 +445,9 @@ def test_Poly__new__(): Poly(3*x**5 + 65536*x**4 + x**3 + 65536*x** 2 + 1, x, modulus=65537, symmetric=False) + N = 10**100 + assert Poly(-1, x, modulus=N, symmetric=False).as_expr() == N - 1 + assert isinstance(Poly(x**2 + x + 1.0).get_domain(), RealField) assert isinstance(Poly(x**2 + x + I + 1.0).get_domain(), ComplexField) @@ -1541,6 +1544,10 @@ def test_Poly_clear_denoms(): assert coeff == 2 and poly == Poly( x + 2, x, domain='QQ') and poly.get_domain() == QQ + coeff, poly = Poly(2*x**2 + 3, modulus=5).clear_denoms() + assert coeff == 1 and poly == Poly( + 2*x**2 + 3, x, modulus=5) and poly.get_domain() == FF(5) + coeff, poly = Poly(x/2 + 1, x).clear_denoms(convert=True) assert coeff == 2 and poly == Poly( x + 2, x, domain='ZZ') and poly.get_domain() == ZZ @@ -3405,6 +3412,12 @@ def test_cancel(): assert cancel((x**2 + 1)/(x - I)) == x + I +def test_cancel_modulus(): + assert cancel((x**2 - 1)/(x + 1), modulus=2) == x + 1 + assert Poly(x**2 - 1, modulus=2).cancel(Poly(x + 1, modulus=2)) ==\ + (1, Poly(x + 1, modulus=2), Poly(1, x, modulus=2)) + + def test_make_monic_over_integers_by_scaling_roots(): f = Poly(x**2 + 3*x + 4, x, domain='ZZ') g, c = f.make_monic_over_integers_by_scaling_roots()
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26796@ac5a022
sympy/sympy
Python
26796
diophantine: make trivial solution satisfy assumptions
Currently, trivial solutions will always be returned regardless of the assumptions of positiveness. An additional check is now added after a trivial solution is appended. <!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #26783 #### Brief description of what is fixed or changed * An assumption check for the trivial solution is added. * 3 more test cases in the test function. * Changes have been made to `diop_solve()` to warn users that the function will not filter out infeasible solutions. #### Other comments I'm not sure if the additional warning is enough for users who care about the variable assumptions. Maybe we should deprecate the `diop_solve()`, which forces the users to directly use `diophantine()` instead? #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * solvers * Now `diophantine()` only returns a trivial solution under assumptions. <!-- END RELEASE NOTES -->
2024-07-11T15:50:59Z
diophantine and diop_solve ignore assumptions Using `solvers.diophantine` `diophantine` and `diop_solve` functions with integer and positive assumed symbols, also `{(0, 0)}` and negative solutions are returned. For example: ```python from sympy import Symbol from sympy.solvers.diophantine import diophantine, diop_solve x = Symbol("x", integer=True, positive=True) y = Symbol("y", integer=True, positive=True) eq = 10*x**2 + 5*x*y - 3*y print(diophantine(eq)) print(diop_solve(eq)) ``` prints ``` {(0, 0)} {(1, -5), (-3, 5), (0, 0)} ``` while it should print at least an empty solution set.
It seems currently `diop_quadratic()` will always return a trivial solution if applied when no other solutions are found. https://github.com/sympy/sympy/blob/3a812204052176854bdfc1ea9d12097c3f9307ca/sympy/solvers/diophantine/diophantine.py#L1479-L1485 Maybe we should rearrange the logic? By the way, you should use `diophantine()` rather than `diop_solve()` since the latter won't filter out the negative solutions. I'm not inside `sympy` logic, but if a symbol is subjected to certain assumptions, intuitively I expect that functions I call on that symbols are going to adhere to such assumptions. So yes, probably editing `diophantine()` to make it returns an empty set if the trivial solution is not compatible with assumptions will be a better choice. Regarding `diop_solve()`, as it is an internal use function only, maybe adding in the documentation the warning that it does not necessarily respect assumptions (except the obvious `integer=True`) is the way.
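For reference, the behaviour after the fix is that the trivial solution is only returned when it is compatible with the symbols' assumptions; the sketch below is editorial, with expected outputs taken from the new `test_assumptions` cases.

```python
# diophantine() now drops the trivial solution when it violates assumptions.
from sympy import symbols
from sympy.solvers.diophantine import diophantine

x, y = symbols('x y', integer=True, positive=True)
print(diophantine(10*x**2 + 5*x*y - 3*y))   # set()

x, y = symbols('x y', integer=True, negative=False)
print(diophantine(10*x**2 + 5*x*y - 3*y))   # {(0, 0)}
```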
[ { "body": "Using `solvers.diophantine` `diophantine` and `diop_solve` functions with integer and positive assumed symbols, also `{(0, 0)}` and negative solutions are returned.\r\nFor example:\r\n```python\r\nfrom sympy import Symbol\r\nfrom sympy.solvers.diophantine import diophantine, diop_solve\r\n\r\nx = Symbol(\"x\", integer=True, positive=True)\r\ny = Symbol(\"y\", integer=True, positive=True)\r\neq = 10*x**2 + 5*x*y - 3*y\r\nprint(diophantine(eq))\r\nprint(diop_solve(eq))\r\n```\r\nprints\r\n```\r\n{(0, 0)}\r\n{(1, -5), (-3, 5), (0, 0)}\r\n```\r\nwhile it should print at least an empty solution set.", "number": 26783, "title": "diophantine and diop_solve ignore assumptions" } ]
e2145b58f528b495d08e0670ad5c2a19c1d1bbe1
{ "head_commit": "ac5a02295beeaccbe3e8d3813ef38b55d910f775", "head_commit_message": "diophantine: trivial solution under assumptions\n\nCurrently trivial solution will always be returned regardless of\nthe assumptions of positiveness. An additional check is now added\nafter a trivial solution is appended.", "patch_to_review": "diff --git a/sympy/solvers/diophantine/diophantine.py b/sympy/solvers/diophantine/diophantine.py\nindex d02d0689d814..e2cbb57509e5 100644\n--- a/sympy/solvers/diophantine/diophantine.py\n+++ b/sympy/solvers/diophantine/diophantine.py\n@@ -1482,7 +1482,14 @@ def diophantine(eq, param=symbols(\"t\", integer=True), syms=None,\n null = tuple([0]*len(var))\n # if there is no solution, return trivial solution\n if not sols and eq.subs(zip(var, null)).is_zero:\n- sols.add(null)\n+ null_feasible = True\n+ for val, symb in zip(null, var):\n+ if check_assumptions(val, **symb.assumptions0) is False:\n+ null_feasible = False\n+ break\n+ if null_feasible:\n+ sols.add(null)\n+\n final_soln = set()\n for sol in sols:\n if all(int_valued(s) for s in sol):\n@@ -1558,7 +1565,8 @@ def diop_solve(eq, param=symbols(\"t\", integer=True)):\n \n Use of ``diophantine()`` is recommended over other helper functions.\n ``diop_solve()`` can return either a set or a tuple depending on the\n- nature of the equation.\n+ nature of the equation. Solutions which DO NOT satisfy the assumptions\n+ such as `positive=True` are returned as well.\n \n Usage\n =====\ndiff --git a/sympy/solvers/diophantine/tests/test_diophantine.py b/sympy/solvers/diophantine/tests/test_diophantine.py\nindex a500903fc06f..e36c4c9ad165 100644\n--- a/sympy/solvers/diophantine/tests/test_diophantine.py\n+++ b/sympy/solvers/diophantine/tests/test_diophantine.py\n@@ -752,6 +752,18 @@ def test_assumptions():\n diof = diophantine(a*b + 2*a + 3*b - 6)\n assert diof == {(-15, -3), (-9, -4), (-7, -5), (-6, -6), (-5, -8), (-4, -14)}\n \n+ x, y = symbols('x y', integer=True)\n+ diof = diophantine(10*x**2 + 5*x*y - 3*y)\n+ assert diof == {(1, -5), (-3, 5), (0, 0)}\n+\n+ x, y = symbols('x y', integer=True, positive=True)\n+ diof = diophantine(10*x**2 + 5*x*y - 3*y)\n+ assert diof == set()\n+\n+ x, y = symbols('x y', integer=True, negative=False)\n+ diof = diophantine(10*x**2 + 5*x*y - 3*y)\n+ assert diof == {(0, 0)}\n+\n \n def check_solutions(eq):\n \"\"\"\n" }
[ { "diff_hunk": "@@ -1482,7 +1482,14 @@ def diophantine(eq, param=symbols(\"t\", integer=True), syms=None,\n null = tuple([0]*len(var))\n # if there is no solution, return trivial solution\n if not sols and eq.subs(zip(var, null)).is_zero:\n- sols.add(null)\n+ null_feasible = True\n+ for val, symb in zip(null, var):\n+ if check_assumptions(val, **symb.assumptions0) is False:\n+ null_feasible = False\n+ break\n+ if null_feasible:", "line": null, "original_line": 1490, "original_start_line": 1485, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\n```suggestion\r\n if all(check_assumptions(v, **s.assumptions0) is not False for v, s in zip(mull, var)):\r\n```" } ]
f316763465893a8cce7d2e65a36f24173b7faf4b
diff --git a/sympy/solvers/diophantine/diophantine.py b/sympy/solvers/diophantine/diophantine.py index d02d0689d814..8200ef6da21e 100644 --- a/sympy/solvers/diophantine/diophantine.py +++ b/sympy/solvers/diophantine/diophantine.py @@ -1482,7 +1482,9 @@ def diophantine(eq, param=symbols("t", integer=True), syms=None, null = tuple([0]*len(var)) # if there is no solution, return trivial solution if not sols and eq.subs(zip(var, null)).is_zero: - sols.add(null) + if all(check_assumptions(val, **s.assumptions0) is not False for val, s in zip(null, var)): + sols.add(null) + final_soln = set() for sol in sols: if all(int_valued(s) for s in sol): @@ -1558,7 +1560,8 @@ def diop_solve(eq, param=symbols("t", integer=True)): Use of ``diophantine()`` is recommended over other helper functions. ``diop_solve()`` can return either a set or a tuple depending on the - nature of the equation. + nature of the equation. All non-trivial solutions are returned: assumptions + on symbols are ignored. Usage ===== diff --git a/sympy/solvers/diophantine/tests/test_diophantine.py b/sympy/solvers/diophantine/tests/test_diophantine.py index a500903fc06f..e36c4c9ad165 100644 --- a/sympy/solvers/diophantine/tests/test_diophantine.py +++ b/sympy/solvers/diophantine/tests/test_diophantine.py @@ -752,6 +752,18 @@ def test_assumptions(): diof = diophantine(a*b + 2*a + 3*b - 6) assert diof == {(-15, -3), (-9, -4), (-7, -5), (-6, -6), (-5, -8), (-4, -14)} + x, y = symbols('x y', integer=True) + diof = diophantine(10*x**2 + 5*x*y - 3*y) + assert diof == {(1, -5), (-3, 5), (0, 0)} + + x, y = symbols('x y', integer=True, positive=True) + diof = diophantine(10*x**2 + 5*x*y - 3*y) + assert diof == set() + + x, y = symbols('x y', integer=True, negative=False) + diof = diophantine(10*x**2 + 5*x*y - 3*y) + assert diof == {(0, 0)} + def check_solutions(eq): """
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26883@ceb6266
sympy/sympy
Python
26883
[PDF] Fix "Missing character" issues when building PDF
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs Fix #26877 <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed Suitable LaTeX font configuration to support all needed characters. #### Other comments We only need two additional fonts from a standard LaTeX installation based on TeXLive. But rather than go check how this is packaged in Ubuntu or on Windows, this is done in a completely safe way and will be enacted only if the fonts are actually there. Most characters were already available in DejaVu Sans, but not in DejaVu Sans Mono. A mark-up glitch in a source file, found while examining the PDF, was also fixed. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-07-30T17:52:19Z
[PDF] Missing characters are reported during PDF via LaTeX build Page numbers from log maybe off because I had a problem with the Chromium svg to pdf conversion so perhaps this induces some shift not sure. Besides only some pages numbers are shown because by luck the warning was the first one on current page, so `[NNNN]` indicates the page *before*. I hope globally this is enough to locate sources. The ones with U+0003 are a bit strange and may indicated corrupted sources (no it was simply `\mathbf{\Lambda}` see next comment) others are more expected. Fixing the latter will require finding fonts with the suitable glyphs and do some preamble additions. ```text latex$ grep -C1 'Missing character' sympy-1.14.dev.log [329] Missing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script =latn;language=dflt;mapping=tex-text;! Missing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script =latn;language=dflt;mapping=tex-text;! -- [330] Missing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script =latn;language=dflt;mapping=tex-text;! Missing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script =latn;language=dflt;mapping=tex-text;! -- [1135] Missing character: There is no ∊ (U+220A) in font DejaVu Serif/OT:script=latn;l anguage=dflt;mapping=tex-text;! -- [1741] Missing character: There is no 𝑅 (U+1D445) in font DejaVu Serif/OT:script=latn; language=dflt;mapping=tex-text;! -- [2223] Missing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! -- Missing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! -- [2231] Missing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! -- [2236] Missing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! -- [2545] Missing character: There is no ᵦ (U+1D66) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ᵪ (U+1D6A) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ᵧ (U+1D67) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ᵩ (U+1D69) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ᵨ (U+1D68) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! -- [2546] Missing character: There is no ┬ (U+252C) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ⅆ (U+2146) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no 𝕀 (U+1D540) in font DejaVu Sans Mono Bold/OT:scr ipt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ⊼ (U+22BC) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ⊽ (U+22BD) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no 𝟙 (U+1D7D9) in font DejaVu Sans Mono Bold/OT:scr ipt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ┴ (U+2534) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ⨂ (U+2A02) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! 
Missing character: There is no 𝕌 (U+1D54C) in font DejaVu Sans Mono Bold/OT:scr ipt=latn;language=dflt;mapping=tex-text;! Missing character: There is no ⊻ (U+22BB) in font DejaVu Sans Mono Bold/OT:scri pt=latn;language=dflt;mapping=tex-text;! Missing character: There is no 𝟘 (U+1D7D8) in font DejaVu Sans Mono Bold/OT:scr ipt=latn;language=dflt;mapping=tex-text;! -- [3472] Missing character: There is no 彭 (U+5F6D) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! Missing character: There is no 于 (U+4E8E) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! Missing character: There is no 斌 (U+658C) in font DejaVu Sans Mono/OT:script=la tn;language=dflt;mapping=tex-text;! -- [3474] Missing character: There is no ❌ (U+274C) in font DejaVu Serif/OT:script=latn;l anguage=dflt;mapping=tex-text;! ```
The `^^C` ones come from `\mathbf{\Lambda_c}` mark-up in `src/explanation/modules/physics/mechanics/lagrange.rst`. Edit: *this next sentence is **wrong** on two counts. Loading `unicode-math` does not solve this issue. At least with the current version one should use `\symbf` here not `\mathbf`. And actually it does not seem to increase build time either (I had seen LaTeX on some other big project with custom set-up almost stalled right after loading unicode-math-table.tex, but on investigating closer today I realize it was stalled due to something done *next*... apologies to unicode-math here).* <strike>Such mark-up may work (untested) if loading package `unicode-math` (which will cause a noticeable addition of a few seconds to the build time of PDF) but perhaps</strike> simpler to use `\bm` in place of `\mathbf` (as package `bm` is loaded via conf.py set-up anyhow). However I don't know if MathJax interprets `\bm` (probably it does?). with `\mathbf` ![Capture d’écran 2024-07-30 à 12 59 44](https://github.com/user-attachments/assets/462d9403-953c-413c-b429-127dff2ead10) with `\bm` ![Capture d’écran 2024-07-30 à 13 00 40](https://github.com/user-attachments/assets/fc1ba46c-f42e-4997-85d7-21bb11337e08) Looking more closely, the problem is also related to fontspec package which modifies `\mathbf` to map to current text font. Using ``\usepackage[no-math]{fontspec}`` *also fixes this*. Indeed `\mathbf` in traditional LaTeX has never worked for *lowercase* Greek (by design) but always worked for the 10 or 11 uppercase Greek letters having their associated macro name such as here `\Lambda` (because `\mathbf` maps to an old-fashioned TeX font in OT1 encoding which happens to have the bold uppercase Greek letters in slots 0 to 10 (or 9, I forget)). The `\usepackage{fontspec}` originates here from old Sympy conf.py "denounced" at PR #26868, but it is also used upstream at Sphinx. I think Sphinx should do `\usepackage[no-math]{fontspec}` and will perhaps raise an issue there. Edit 2: *upstream issue raised at https://github.com/sphinx-doc/sphinx/issues/12714. It is not really an upstream issue currently for Sympy but will become one if #26868 is merged. Currently `\usepackage{fontspec}` originates from Sympy itself. In both cases using the `'passoptionstopackages'`of `latex_elements` could be used to pass `no-math`.* Edit 3: with `\usepackage[no-math]{fontspec}` + (after it) `\usepackage{unicode-math}` one gets another type of error: ``` Missing character: There is no 𝛬 (U+1D6EC) in font DejaVu Serif Bold/OT:script= latn;language=dflt;mapping=tex-text;! ``` Indeed `unicode-math` transforms `\mathbf` into something trying to use Unicode code-points so the error change nature. - If using `lualatex` there is a mechanism of fallback-fonts which could be attempted. - in fact the correct mark-up with `unicode-math` is probably `\symbf`. But it has to work for HTML of course. I tested now and MathJax renders it correctly. My HTML from Sphinx 8 contained ```html <script async="async" src="[https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js](view-source:https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js)"></script> ``` and the boldface Lambda was there on the page. Advantage of unicode-math is to allow literal Unicode in math mode. But in the case at hand as we see `\mathbf` has to be replaced by `\symbf` (and `no-math` is still recommended for fontspec, leave it all to unicode-math to handle "at begin document").
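For orientation, the direction this PR settles on can be sketched as a Sphinx `latex_elements` fragment in conf.py style. This is a simplified editorial excerpt: the real settings and the full list of `\newunicodechar` remappings live in `doc/src/conf.py` (shown in the patch below), and the diff hunk does not show which key holds the font set-up, so `'fontpkg'` — the standard Sphinx slot — is assumed here.

```python
# Simplified conf.py sketch: pass no-math to fontspec and remap the glyphs
# missing from DejaVu Sans Mono onto the sans family.
latex_elements = {
    'fontpkg': r'''
\usepackage{bm}
\usepackage{amssymb}
\usepackage[no-math]{fontspec}   % no-math keeps fontspec away from math fonts
\usepackage[english]{babel}
\defaultfontfeatures{Mapping=tex-text}
\setmainfont{DejaVu Serif}
''',
    'preamble': r'''
\usepackage{newunicodechar}
% available in DejaVu Sans but not in DejaVu Sans Mono
\newunicodechar{⊻}{\text{\sffamily ⊻}}% (U+22BB)
''',
}
```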
[ { "body": "Page numbers from log maybe off because I had a problem with the Chromium svg to pdf conversion so perhaps this induces some shift not sure. Besides only some pages numbers are shown because by luck the warning was the first one on current page, so `[NNNN]` indicates the page *before*.\r\n\r\nI hope globally this is enough to locate sources. The ones with U+0003 are a bit strange and may indicated corrupted sources (no it was simply `\\mathbf{\\Lambda}` see next comment) others are more expected. Fixing the latter will require finding fonts with the suitable glyphs and do some preamble additions.\r\n\r\n```text\r\nlatex$ grep -C1 'Missing character' sympy-1.14.dev.log\r\n[329]\r\nMissing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script\r\n=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script\r\n=latn;language=dflt;mapping=tex-text;!\r\n--\r\n[330]\r\nMissing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script\r\n=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ^^C (U+0003) in font DejaVu Serif Bold/OT:script\r\n=latn;language=dflt;mapping=tex-text;!\r\n--\r\n[1135]\r\nMissing character: There is no ∊ (U+220A) in font DejaVu Serif/OT:script=latn;l\r\nanguage=dflt;mapping=tex-text;!\r\n--\r\n[1741]\r\nMissing character: There is no 𝑅 (U+1D445) in font DejaVu Serif/OT:script=latn;\r\nlanguage=dflt;mapping=tex-text;!\r\n--\r\n[2223]\r\nMissing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\n--\r\n\r\nMissing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\n--\r\n[2231]\r\nMissing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\n--\r\n[2236]\r\nMissing character: There is no ⭯ (U+2B6F) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\n--\r\n[2545]\r\nMissing character: There is no ᵦ (U+1D66) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ᵪ (U+1D6A) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ᵧ (U+1D67) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ᵩ (U+1D69) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ᵨ (U+1D68) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\n--\r\n[2546]\r\nMissing character: There is no ┬ (U+252C) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ⅆ (U+2146) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 𝕀 (U+1D540) in font DejaVu Sans Mono Bold/OT:scr\r\nipt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ⊼ (U+22BC) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ⊽ (U+22BD) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 𝟙 (U+1D7D9) in font DejaVu Sans Mono Bold/OT:scr\r\nipt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ┴ (U+2534) in font DejaVu Sans Mono 
Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ⨂ (U+2A02) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 𝕌 (U+1D54C) in font DejaVu Sans Mono Bold/OT:scr\r\nipt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no ⊻ (U+22BB) in font DejaVu Sans Mono Bold/OT:scri\r\npt=latn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 𝟘 (U+1D7D8) in font DejaVu Sans Mono Bold/OT:scr\r\nipt=latn;language=dflt;mapping=tex-text;!\r\n--\r\n[3472]\r\nMissing character: There is no 彭 (U+5F6D) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 于 (U+4E8E) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\nMissing character: There is no 斌 (U+658C) in font DejaVu Sans Mono/OT:script=la\r\ntn;language=dflt;mapping=tex-text;!\r\n--\r\n[3474]\r\nMissing character: There is no ❌ (U+274C) in font DejaVu Serif/OT:script=latn;l\r\nanguage=dflt;mapping=tex-text;!\r\n```\r\n\r\n\r\n", "number": 26877, "title": "[PDF] Missing characters are reported during PDF via LaTeX build" } ]
9f339a2056ade589f925b84da7160c0b73fecd2f
{ "head_commit": "ceb62663d844b91d2d5622bccd74cba949a8ad65", "head_commit_message": "[PDF] Fix \"Missing character\" issues when building PDF\n\nFix #26877", "patch_to_review": "diff --git a/doc/src/conf.py b/doc/src/conf.py\nindex cb2a5409fb1f..517557f02e5c 100644\n--- a/doc/src/conf.py\n+++ b/doc/src/conf.py\n@@ -372,7 +372,7 @@\n \n \\usepackage{bm}\n \\usepackage{amssymb}\n-\\usepackage{fontspec}\n+\\usepackage[no-math]{fontspec}\n \\usepackage[english]{babel}\n \\defaultfontfeatures{Mapping=tex-text}\n \\setmainfont{DejaVu Serif}\n@@ -383,7 +383,79 @@\n 'inputenc': '',\n 'utf8extra': '',\n 'preamble': r'''\n-'''\n+\\usepackage{newunicodechar}\n+% Some Unicode characters need some re-mapping:\n+% using \\text to allow usage in math mode\n+% Those turn out to be available in DejaVu Sans, but not Mono,\n+% which caused Missing character.\n+% Attention that if sans-serif font is modified in future, this\n+% may need updates.\n+\\newunicodechar{ᵦ}{\\text{\\sffamily ᵦ}}% (U+1D66)\n+\\newunicodechar{ᵧ}{\\text{\\sffamily ᵧ}}% (U+1D67)\n+\\newunicodechar{ᵨ}{\\text{\\sffamily ᵨ}}% (U+1D68)\n+\\newunicodechar{ᵩ}{\\text{\\sffamily ᵩ}}% (U+1D69\n+\\newunicodechar{ᵪ}{\\text{\\sffamily ᵪ}}% (U+1D6A)\n+\\newunicodechar{∧}{\\text{\\sffamily ∧}}% (U+2227)\n+\\newunicodechar{∪}{\\text{\\sffamily ∪}}% (U+222A)\n+\\newunicodechar{ⅆ}{\\text{\\sffamily ⅆ}}% (U+2146)\n+\\newunicodechar{∊}{\\text{\\sffamily ∊}}% (U+220A)\n+\\newunicodechar{⊻}{\\text{\\sffamily ⊻}}% (U+22BB)\n+\\newunicodechar{⊼}{\\text{\\sffamily ⊼}}% (U+22BC)\n+\\newunicodechar{⊽}{\\text{\\sffamily ⊽}}% (U+22BD)\n+\\newunicodechar{⨂}{\\text{\\sffamily ⨂}}% (U+2A02)\n+% Those next two are not available in DejaVu Sans Bold,\n+% we can find them in boldface in XITS or simply use \\mdseries\n+% Opting for the later here.\n+\\newunicodechar{┬}{\\text{\\sffamily\\mdseries ┬}}% (U+252C)\n+\\newunicodechar{┴}{\\text{\\sffamily\\mdseries ┴}}% (U+2534)\n+% Next one (cross mark) is used only once in sources (not in math mode).\n+% Available in Emoji fonts such as Noto Emoji.\n+% U+2715 is available in DejaVu Sans and DejaVu Sans Mono but not Serif\n+\\newunicodechar{❌}{\\textcolor{red}{\\sffamily\\bfseries ✕}}% (U+274C --> U+2715)\n+%\n+% \\IfFontExistsTF was added to fontspec at v2.5c 2017/01/02.\n+% If it does not exist we take no risk.\n+\\makeatletter\n+\\ifdefined\\IfFontExistsTF\\else\\let\\IfFontExistsTF\\@secondoftwo\\fi\n+\\makeatother\n+%\n+%\n+\\IfFontExistsTF{NewCMMath-Regular.otf}\n+ {%\n+ \\newfontfamily{\\NCMMath}{NewCMMath-Regular}\n+ % This one is available (on TeXLive 2024) only in New Computer Modern Math\n+ % and OldStandard-Math.\n+ \\newunicodechar{⭯}{\\text{\\NCMMath ⭯}}% (U+2B6F)\n+ % Those next few are all available in New Computer Modern Math:\n+ \\newunicodechar{𝑅}{\\text{\\NCMMath 𝑅}}% (U+1D445)\n+ \\newunicodechar{𝕀}{\\text{\\NCMMath 𝕀}}% (U+1D540)\n+ \\newunicodechar{𝕌}{\\text{\\NCMMath 𝕌}}% (U+1D54C)\n+ \\newunicodechar{𝟘}{\\text{\\NCMMath 𝟘}}% (U+1D7D8)\n+ \\newunicodechar{𝟙}{\\text{\\NCMMath 𝟙}}% (U+1D7D9)\n+ }\n+ {\\AtEndDocument{\\typeout{%\n+ 𝑅 and some like characters could not be rendered as^^J%\n+ New Computer Modern Math font could not be found or fontspec^^J\n+ package is too old (we need 2.5c 2017/01/02 or later).%\n+ }%\n+ }%\n+ }%\n+\\IfFontExistsTF{HaranoAjiMincho-Regular.otf}\n+ {%\n+ % A few Asian ideograms:\n+ \\newfontfamily{\\HaranoAjiMincho}{Harano Aji Mincho}\n+ \\newunicodechar{于}{\\text{\\HaranoAjiMincho 于}}% (U+4E8E)\n+ \\newunicodechar{彭}{\\text{\\HaranoAjiMincho 彭}}% (U+5F6D)\n+ 
\\newunicodechar{斌}{\\text{\\HaranoAjiMincho 斌}}% (U+658C)\n+ }\n+ {\\AtEndDocument{\\typeout{%\n+ 于 and some like characters could not be rendered as^^J%\n+ Harano Aji Mincho font could not be found or fontspec^^J\n+ package is too old (we need 2.5c 2017/01/02 or later).%\n+ }%\n+ }%\n+ }%\n+''',\n }\n \n # SymPy logo on title page\ndiff --git a/doc/src/modules/solvers/solveset.rst b/doc/src/modules/solvers/solveset.rst\nindex 8c83c355e094..189b0e7b3db3 100644\n--- a/doc/src/modules/solvers/solveset.rst\n+++ b/doc/src/modules/solvers/solveset.rst\n@@ -105,7 +105,7 @@ containers in mathematics such as:\n \n Also, the predefined set classes such as:\n \n- * :class:`~.Naturals`, $\\mathbb{N}\n+ * :class:`~.Naturals`, $\\mathbb{N}$\n \n Represents the natural numbers (or counting numbers), which are all\n positive integers starting from 1.\n" }
[ { "diff_hunk": "@@ -372,7 +372,7 @@\n \n \\usepackage{bm}", "line": null, "original_line": 373, "original_start_line": null, "path": "doc/src/conf.py", "start_line": null, "text": "@author:\nThis serves nothing because there is not a single usage of `\\bm` in math mark-up in sources. But who knows it may help in future. (MathJax in its default configuration as set-up by current Sphinx appears to render the `\\bm`; of course the mark-up has primarily to work for HTML output so if `\\bm{}` syntax was not understood by HTML this would be a no-go anyhow).\n\n@user1:\nThis seems to be a holdover from galgebra, which was removed from sympy (see 8a29065982af9e8c512824688ded9b41eacbff56). We can remove it if it is unused. \n\n@author:\nI pushed a commit at PR #26868 which removes `\\usepackage{bm}`." }, { "diff_hunk": "@@ -383,7 +383,79 @@\n 'inputenc': '',\n 'utf8extra': '',\n 'preamble': r'''\n-'''\n+\\usepackage{newunicodechar}\n+% Some Unicode characters need some re-mapping:\n+% using \\text to allow usage in math mode\n+% Those turn out to be available in DejaVu Sans, but not Mono,\n+% which caused Missing character.\n+% Attention that if sans-serif font is modified in future, this\n+% may need updates.\n+\\newunicodechar{ᵦ}{\\text{\\sffamily ᵦ}}% (U+1D66)\n+\\newunicodechar{ᵧ}{\\text{\\sffamily ᵧ}}% (U+1D67)\n+\\newunicodechar{ᵨ}{\\text{\\sffamily ᵨ}}% (U+1D68)\n+\\newunicodechar{ᵩ}{\\text{\\sffamily ᵩ}}% (U+1D69\n+\\newunicodechar{ᵪ}{\\text{\\sffamily ᵪ}}% (U+1D6A)\n+\\newunicodechar{∧}{\\text{\\sffamily ∧}}% (U+2227)\n+\\newunicodechar{∪}{\\text{\\sffamily ∪}}% (U+222A)\n+\\newunicodechar{ⅆ}{\\text{\\sffamily ⅆ}}% (U+2146)\n+\\newunicodechar{∊}{\\text{\\sffamily ∊}}% (U+220A)\n+\\newunicodechar{⊻}{\\text{\\sffamily ⊻}}% (U+22BB)\n+\\newunicodechar{⊼}{\\text{\\sffamily ⊼}}% (U+22BC)\n+\\newunicodechar{⊽}{\\text{\\sffamily ⊽}}% (U+22BD)\n+\\newunicodechar{⨂}{\\text{\\sffamily ⨂}}% (U+2A02)\n+% Those next two are not available in DejaVu Sans Bold,\n+% we can find them in boldface in XITS or simply use \\mdseries\n+% Opting for the later here.\n+\\newunicodechar{┬}{\\text{\\sffamily\\mdseries ┬}}% (U+252C)\n+\\newunicodechar{┴}{\\text{\\sffamily\\mdseries ┴}}% (U+2534)\n+% Next one (cross mark) is used only once in sources (not in math mode).\n+% Available in Emoji fonts such as Noto Emoji.\n+% U+2715 is available in DejaVu Sans and DejaVu Sans Mono but not Serif\n+\\newunicodechar{❌}{\\textcolor{red}{\\sffamily\\bfseries ✕}}% (U+274C --> U+2715)\n+%\n+% \\IfFontExistsTF was added to fontspec at v2.5c 2017/01/02.\n+% If it does not exist we take no risk.\n+\\makeatletter\n+\\ifdefined\\IfFontExistsTF\\else\\let\\IfFontExistsTF\\@secondoftwo\\fi\n+\\makeatother\n+%\n+%\n+\\IfFontExistsTF{NewCMMath-Regular.otf}\n+ {%\n+ \\newfontfamily{\\NCMMath}{NewCMMath-Regular}\n+ % This one is available (on TeXLive 2024) only in New Computer Modern Math\n+ % and OldStandard-Math.\n+ \\newunicodechar{⭯}{\\text{\\NCMMath ⭯}}% (U+2B6F)\n+ % Those next few are all available in New Computer Modern Math:\n+ \\newunicodechar{𝑅}{\\text{\\NCMMath 𝑅}}% (U+1D445)\n+ \\newunicodechar{𝕀}{\\text{\\NCMMath 𝕀}}% (U+1D540)\n+ \\newunicodechar{𝕌}{\\text{\\NCMMath 𝕌}}% (U+1D54C)\n+ \\newunicodechar{𝟘}{\\text{\\NCMMath 𝟘}}% (U+1D7D8)\n+ \\newunicodechar{𝟙}{\\text{\\NCMMath 𝟙}}% (U+1D7D9)\n+ }\n+ {\\AtEndDocument{\\typeout{%\n+ 𝑅 and some like characters could not be rendered as^^J%\n+ New Computer Modern Math font could not be found or fontspec^^J", "line": null, "original_line": 438, "original_start_line": null, "path": 
"doc/src/conf.py", "start_line": null, "text": "@author:\nshould be `^^J%` to avoid space at start of next line in console output but well.\r\n```suggestion\r\n New Computer Modern Math font could not be found or fontspec^^J%\r\n```" }, { "diff_hunk": "@@ -383,7 +383,79 @@\n 'inputenc': '',\n 'utf8extra': '',\n 'preamble': r'''\n-'''\n+\\usepackage{newunicodechar}\n+% Some Unicode characters need some re-mapping:\n+% using \\text to allow usage in math mode\n+% Those turn out to be available in DejaVu Sans, but not Mono,\n+% which caused Missing character.\n+% Attention that if sans-serif font is modified in future, this\n+% may need updates.\n+\\newunicodechar{ᵦ}{\\text{\\sffamily ᵦ}}% (U+1D66)\n+\\newunicodechar{ᵧ}{\\text{\\sffamily ᵧ}}% (U+1D67)\n+\\newunicodechar{ᵨ}{\\text{\\sffamily ᵨ}}% (U+1D68)\n+\\newunicodechar{ᵩ}{\\text{\\sffamily ᵩ}}% (U+1D69\n+\\newunicodechar{ᵪ}{\\text{\\sffamily ᵪ}}% (U+1D6A)\n+\\newunicodechar{∧}{\\text{\\sffamily ∧}}% (U+2227)\n+\\newunicodechar{∪}{\\text{\\sffamily ∪}}% (U+222A)\n+\\newunicodechar{ⅆ}{\\text{\\sffamily ⅆ}}% (U+2146)\n+\\newunicodechar{∊}{\\text{\\sffamily ∊}}% (U+220A)\n+\\newunicodechar{⊻}{\\text{\\sffamily ⊻}}% (U+22BB)\n+\\newunicodechar{⊼}{\\text{\\sffamily ⊼}}% (U+22BC)\n+\\newunicodechar{⊽}{\\text{\\sffamily ⊽}}% (U+22BD)\n+\\newunicodechar{⨂}{\\text{\\sffamily ⨂}}% (U+2A02)\n+% Those next two are not available in DejaVu Sans Bold,\n+% we can find them in boldface in XITS or simply use \\mdseries\n+% Opting for the later here.\n+\\newunicodechar{┬}{\\text{\\sffamily\\mdseries ┬}}% (U+252C)\n+\\newunicodechar{┴}{\\text{\\sffamily\\mdseries ┴}}% (U+2534)\n+% Next one (cross mark) is used only once in sources (not in math mode).\n+% Available in Emoji fonts such as Noto Emoji.\n+% U+2715 is available in DejaVu Sans and DejaVu Sans Mono but not Serif\n+\\newunicodechar{❌}{\\textcolor{red}{\\sffamily\\bfseries ✕}}% (U+274C --> U+2715)\n+%\n+% \\IfFontExistsTF was added to fontspec at v2.5c 2017/01/02.\n+% If it does not exist we take no risk.\n+\\makeatletter\n+\\ifdefined\\IfFontExistsTF\\else\\let\\IfFontExistsTF\\@secondoftwo\\fi\n+\\makeatother\n+%\n+%\n+\\IfFontExistsTF{NewCMMath-Regular.otf}\n+ {%\n+ \\newfontfamily{\\NCMMath}{NewCMMath-Regular}\n+ % This one is available (on TeXLive 2024) only in New Computer Modern Math\n+ % and OldStandard-Math.\n+ \\newunicodechar{⭯}{\\text{\\NCMMath ⭯}}% (U+2B6F)\n+ % Those next few are all available in New Computer Modern Math:\n+ \\newunicodechar{𝑅}{\\text{\\NCMMath 𝑅}}% (U+1D445)\n+ \\newunicodechar{𝕀}{\\text{\\NCMMath 𝕀}}% (U+1D540)\n+ \\newunicodechar{𝕌}{\\text{\\NCMMath 𝕌}}% (U+1D54C)\n+ \\newunicodechar{𝟘}{\\text{\\NCMMath 𝟘}}% (U+1D7D8)\n+ \\newunicodechar{𝟙}{\\text{\\NCMMath 𝟙}}% (U+1D7D9)\n+ }\n+ {\\AtEndDocument{\\typeout{%\n+ 𝑅 and some like characters could not be rendered as^^J%\n+ New Computer Modern Math font could not be found or fontspec^^J\n+ package is too old (we need 2.5c 2017/01/02 or later).%\n+ }%\n+ }%\n+ }%\n+\\IfFontExistsTF{HaranoAjiMincho-Regular.otf}\n+ {%\n+ % A few Asian ideograms:\n+ \\newfontfamily{\\HaranoAjiMincho}{Harano Aji Mincho}\n+ \\newunicodechar{于}{\\text{\\HaranoAjiMincho 于}}% (U+4E8E)\n+ \\newunicodechar{彭}{\\text{\\HaranoAjiMincho 彭}}% (U+5F6D)\n+ \\newunicodechar{斌}{\\text{\\HaranoAjiMincho 斌}}% (U+658C)\n+ }\n+ {\\AtEndDocument{\\typeout{%\n+ 于 and some like characters could not be rendered as^^J%\n+ Harano Aji Mincho font could not be found or fontspec^^J", "line": null, "original_line": 453, "original_start_line": null, "path": 
"doc/src/conf.py", "start_line": null, "text": "@author:\nsame here better with `^^J%`.\r\n```suggestion\r\n Harano Aji Mincho font could not be found or fontspec^^J%\r\n```" } ]
c17c93ee37b2c7e64d8fc85ae82542d1cd4b59e4
diff --git a/doc/src/conf.py b/doc/src/conf.py index aac398d9cec8..c553d6c70afe 100644 --- a/doc/src/conf.py +++ b/doc/src/conf.py @@ -364,12 +364,50 @@ latex_engine = 'xelatex' latex_use_xindy = False latex_elements = { + 'passoptionstopackages': r'\PassOptionsToPackage{no-math}{fontspec}', 'fontpkg': r''' \setmainfont{DejaVu Serif} \setsansfont{DejaVu Sans} \setmonofont{DejaVu Sans Mono} ''', 'preamble': r''' +\usepackage{newunicodechar} +% Some Unicode characters need some re-mapping: +% using \text to allow usage in math mode +% Those turn out to be available in DejaVu Sans, but not Mono, +% which caused Missing character. +% Attention that if sans-serif font is modified in future, this +% may need updates. +\newunicodechar{ᵦ}{\text{\sffamily ᵦ}}% (U+1D66) +\newunicodechar{ᵧ}{\text{\sffamily ᵧ}}% (U+1D67) +\newunicodechar{ᵨ}{\text{\sffamily ᵨ}}% (U+1D68) +\newunicodechar{ᵩ}{\text{\sffamily ᵩ}}% (U+1D69 +\newunicodechar{ᵪ}{\text{\sffamily ᵪ}}% (U+1D6A) +\newunicodechar{∧}{\text{\sffamily ∧}}% (U+2227) +\newunicodechar{∪}{\text{\sffamily ∪}}% (U+222A) +\newunicodechar{ⅆ}{\text{\sffamily ⅆ}}% (U+2146) +\newunicodechar{∊}{\text{\sffamily ∊}}% (U+220A) +\newunicodechar{⊻}{\text{\sffamily ⊻}}% (U+22BB) +\newunicodechar{⊼}{\text{\sffamily ⊼}}% (U+22BC) +\newunicodechar{⊽}{\text{\sffamily ⊽}}% (U+22BD) +\newunicodechar{⨂}{\text{\sffamily ⨂}}% (U+2A02) +% Those next two are not available in DejaVu Sans Bold, +% we can find them in boldface in XITS or simply use \mdseries +% Opting for the later here. +\newunicodechar{┬}{\text{\sffamily\mdseries ┬}}% (U+252C) +\newunicodechar{┴}{\text{\sffamily\mdseries ┴}}% (U+2534) +% Next one (cross mark) is used only once in sources (not in math mode). +% Available in Emoji fonts such as Noto Emoji. +% U+2715 is available in DejaVu Sans and DejaVu Sans Mono but not Serif +\newunicodechar{❌}{\textcolor{red}{\sffamily\bfseries ✕}}% (U+274C --> U+2715) +% +\newfontfamily{\TGDejaVuMath}{texgyredejavu-math.otf} + \newunicodechar{𝑅}{\text{\TGDejaVuMath 𝑅}}% (U+1D445) + \newunicodechar{𝕀}{\text{\TGDejaVuMath 𝕀}}% (U+1D540) + \newunicodechar{𝕌}{\text{\TGDejaVuMath 𝕌}}% (U+1D54C) + \newunicodechar{𝟘}{\text{\TGDejaVuMath 𝟘}}% (U+1D7D8) + \newunicodechar{𝟙}{\text{\TGDejaVuMath 𝟙}}% (U+1D7D9) +% % Define version of \LaTeX that is usable in math mode \usepackage{letltxmacro} \LetLtxMacro\OldLaTeX\LaTeX diff --git a/doc/src/contributing/new-contributors-guide/workflow-process.md b/doc/src/contributing/new-contributors-guide/workflow-process.md index 4d3054ba3cdf..4bb54c609840 100644 --- a/doc/src/contributing/new-contributors-guide/workflow-process.md +++ b/doc/src/contributing/new-contributors-guide/workflow-process.md @@ -639,7 +639,6 @@ index 3af6dc1..7fa63b1 100644 @@ -1307,3 +1307,4 @@ zsc347 <[email protected]> Øyvind Jensen <[email protected]> Łukasz Pankowski <[email protected]> - 彭于斌 <[email protected]> +Joe Bloggs <[email protected]> ``` diff --git a/doc/src/modules/physics/continuum_mechanics/beam_problems.rst b/doc/src/modules/physics/continuum_mechanics/beam_problems.rst index 1b1801d7b307..4e0c2e638fe2 100644 --- a/doc/src/modules/physics/continuum_mechanics/beam_problems.rst +++ b/doc/src/modules/physics/continuum_mechanics/beam_problems.rst @@ -65,7 +65,7 @@ point load of 12 kN is applied at the free end of the beam. 
\\\\|V V V V V V V V V | \\\\|________________|_______________V \\\\| | | - \\\\o - - - - - - - -⭯ 50 kN-m - - - | - - -> x + \\\\o - - - - - - - -↺ 50 kN-m - - - | - - -> x \\\\|________________|_______________| \\\\| : \\\\|----------------|---------------| @@ -255,7 +255,7 @@ deflection is restricted at both the supports. :: - || 8 N ⭯ 120 Nm + || 8 N ↺ 120 Nm \/______________________________________________| |_______________________________________________| /\ /\ @@ -317,7 +317,7 @@ applied from the mid till the end of the beam. ramp load = 1 KN/m/m constant load = 3 KN/m |------------------------| - ⭯ 1.5 KN-m + ↺ 1.5 KN-m ______________________|________________________ |_______________________________________________| o | /\ @@ -550,7 +550,7 @@ overhanging end. 2 KN/m ---> x _________________ | | | | | | | | | | v y - V V V V V V V V V ⭯ 5 KN-m + V V V V V V V V V ↺ 5 KN-m ____________________________________________________| O____________________________________________________| / \ /\ diff --git a/doc/src/modules/solvers/solveset.rst b/doc/src/modules/solvers/solveset.rst index 8c83c355e094..189b0e7b3db3 100644 --- a/doc/src/modules/solvers/solveset.rst +++ b/doc/src/modules/solvers/solveset.rst @@ -105,7 +105,7 @@ containers in mathematics such as: Also, the predefined set classes such as: - * :class:`~.Naturals`, $\mathbb{N} + * :class:`~.Naturals`, $\mathbb{N}$ Represents the natural numbers (or counting numbers), which are all positive integers starting from 1.
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26729@dead56e
sympy/sympy
Python
26,729
use only relative error in control_plots test
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> fixes #26728 #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-06-22T00:55:37Z
Control test fails with numpy 2.0 CC @akshanshbhatt @faze-geek This is the second time I have asked to fix these tests (gh-23128). This test fails with numpy 2.0: ```console $ pytest sympy/physics/control/tests/test_control_plots.py::test_pole_zero ================================================================================================ test session starts ================================================================================================= platform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0 architecture: 64-bit cache: yes ground types: flint (python-flint==0.6.0) rootdir: /Users/enojb/work/dev/sympy configfile: pyproject.toml plugins: instafail-0.5.0, doctestplus-1.2.2.dev5+g5176f32, hypothesis-6.99.9, xdist-3.5.0, timeout-2.3.1, split-0.8.2 collected 1 item sympy/physics/control/tests/test_control_plots.py F [100%] ====================================================================================================== FAILURES ====================================================================================================== ___________________________________________________________________________________________________ test_pole_zero ___________________________________________________________________________________________________ def test_pole_zero(): if not numpy: skip("NumPy is required for this test") def pz_tester(sys, expected_value): z, p = pole_zero_numerical_data(sys) z_check = numpy.allclose(z, expected_value[0]) p_check = numpy.allclose(p, expected_value[1]) return p_check and z_check exp1 = [[], [-0.24999999999999994+1.3919410907075054j, -0.24999999999999994-1.3919410907075054j]] exp2 = [[0.0], [-0.25+0.3227486121839514j, -0.25-0.3227486121839514j]] exp3 = [[0.0], [-0.5000000000000004+0.8660254037844395j, -0.5000000000000004-0.8660254037844395j, 0.9999999999999998+0j]] exp4 = [[], [5.0, 0.0, 0.0, 0.0]] exp5 = [[-5.645751311064592, -0.5000000000000008, -0.3542486889354093], [-0.24999999999999986+1.3919410907075052j, -0.24999999999999986-1.3919410907075052j, -0.2499999999999998+0.32274861218395134j, -0.2499999999999998-0.32274861218395134j]] exp6 = [[], [-1.1641600331447917-3.545808351896439j, -0.8358399668552097+2.5458083518964383j]] > assert pz_tester(tf1, exp1) E assert False E + where False = <function test_pole_zero.<locals>.pz_tester at 0x1148e6c00>(TransferFunction(1, p**2 + 0.5*p + 2, p), [[], [(-0.24999999999999994+1.3919410907075054j), (-0.24999999999999994-1.3919410907075054j)]]) sympy/physics/control/tests/test_control_plots.py:121: AssertionError DO *NOT* COMMIT! ============================================================================================== short test summary info =============================================================================================== FAILED sympy/physics/control/tests/test_control_plots.py::test_pole_zero - assert False ================================================================================================= 1 failed in 0.29s ================================================================================================== ```
I'm not able to reproduce this locally. Tests seem to pass with `numpy==2.0.0`. ``` $ python -c "import numpy; print(numpy.version.version)" 2.0.0 $ pytest sympy/physics/control/tests/test_control_plots.py::test_pole_zero ======================== test session starts ======================== platform darwin -- Python 3.12.4, pytest-8.2.2, pluggy-1.5.0 architecture: 64-bit cache: yes ground types: python rootdir: /Users/akshansh/Documents/GitHub/sympy configfile: pyproject.toml plugins: doctestplus-1.2.1, hypothesis-6.103.2, timeout-2.3.1, split-0.8.2, xdist-3.6.1 collected 1 item sympy/physics/control/tests/test_control_plots.py . [100%] ========================= 1 passed in 0.16s ========================= ``` > I'm not able to reproduce this locally. Tests seem to pass with `numpy==2.0.0`. I can reproduce this only on MacOS. On Linux it passes. On MacOS what is returned is: ``` (Pdb) p pole_zero_numerical_data(tf1) (array([], dtype=float64), array([-0.25-1.39194109j, -0.25+1.39194109j])) ``` The test expects ``` exp1 = [[], [-0.24999999999999994+1.3919410907075054j, -0.24999999999999994-1.3919410907075054j]] ``` The difference is the ordering of the roots. The underlying call is to np.roots: ```python In [1]: np.roots([1. +0.j, 0.5+0.j, 2. +0.j]) # numpy 1.26.4 on MacOS Out[1]: array([-0.25+1.39194109j, -0.25-1.39194109j]) ``` ```python In [1]: np.roots([1. +0.j, 0.5+0.j, 2. +0.j]) # numpy 2.0.0 on MacOS Out[1]: array([-0.25-1.39194109j, -0.25+1.39194109j]) ``` The documentation for np.roots does not say anything about the ordering of the returned roots. It says that they come from computing the eigenvalues of the companion matrix. The eigvals function does say that the roots are not necessarily ordered. The docs do say: ``` .. note:: This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in `numpy.polynomial` is preferred. A summary of the differences can be found in the :doc:`transition guide </reference/routines.polynomials>`. ``` Really though we should be using sympy routines to find the roots because np.roots can be inaccurate for some roots. This is the guaranteed accurate way to find numerical approximations of the roots for any polynomial with rational or float coefficients using SymPy's routines: ```python In [20]: p = x**2 + x/2 + 2 In [21]: [r.evalf() for r in all_roots(p)] Out[21]: [-0.25 - 1.39194109070751⋅ⅈ, -0.25 + 1.39194109070751⋅ⅈ] ``` This method uses exact arithmetic to isolate the roots, to distinguish real and non-real roots, and to identify complex conjugate pairs. The roots are returned by `all_roots` in a guaranteed order that is real roots followed by non-real roots in pairs ordered first by real parts and then by the magnitudes of the imaginary parts: ```python In [22]: all_roots((x-1)*(x-I)*(x+I)*(x-2*I)*(x+2*I)) Out[22]: [1, -ⅈ, ⅈ, -2⋅ⅈ, 2⋅ⅈ] ``` The `all_roots` function is not limited by degree because it uses `RootOf`. The call to `evalf` from the exact representation of the roots is guaranteed to give a result that is accurate to the requested precision. NumPy's `np.roots` function cannot provide similar guarantees because it uses fixed precision floating point. NumPy would ideally be eliminated from `control_plots.py`. This code gets merged without being stricter about eating our own dog food for plotting. I understand that using numpy is not the most elegant approach to get polynomial root data. Still, I don't think that sympy's built-in polynomial root calculator is fast enough to use it instead. 
Sure, sympy gives more consistent and precise results, but we don't require low-tolerance data points for plotting purposes. See the time difference using both of these methods: ```python In [2]: p = x**8 + 9*x**5 + 11*x + 7 # Arbitrary 8-deg polynomial In [3]: %time [r.evalf() for r in all_roots(p)] CPU times: user 4.1 s, sys: 5.52 ms, total: 4.11 s Wall time: 4.11 s Out[3]: [-2.10954725209519, -0.582637006756454, -0.55581013748193 - 0.805706304516959*I, -0.55581013748193 + 0.805706304516959*I, 0.912751454705842 - 0.742665591979299*I, 0.912751454705842 + 0.742665591979299*I, 0.98915081220191 - 1.82059670736243*I, 0.98915081220191 + 1.82059670736243*I] In [4]: %time np.roots(np.array(Poly(p, x).all_coeffs(), dtype=np.complex128)) CPU times: user 2.41 ms, sys: 5.06 ms, total: 7.47 ms Wall time: 11.9 ms Out[4]: array([ 0.98915081+1.82059671e+00j, 0.98915081-1.82059671e+00j, -2.10954725+3.95682719e-16j, 0.91275145+7.42665592e-01j, 0.91275145-7.42665592e-01j, -0.55581014+8.05706305e-01j, -0.55581014-8.05706305e-01j, -0.58263701-2.23618104e-25j]) ``` The precision vs calculation-time tradeoff is quite skewed. This is the reason we decided to use numpy in the first place. Sometime in the future, if root-locus plot is added to this module, it will depend on the pz numerical value function for root calculations, which will be called a hundred, perhaps even a thousand times (for sampling loci points). It will only make this time bottleneck worse. I would love to hear if you have any suggestions for solving this. We can make the SymPy routines faster in various ways including by using better algorithms and/or using python-flint: ```python In [3]: from flint import * In [4]: x = fmpz_poly([0, 1]) In [5]: p = x**8 + 9*x**5 + 11*x + 7 In [6]: p Out[6]: x^8 + 9*x^5 + 11*x + 7 In [7]: %time p.complex_roots() CPU times: user 6.61 ms, sys: 0 ns, total: 6.61 ms Wall time: 76.5 ms Out[7]: [([-2.10954725209519 +/- 6.57e-16], 1), ([-0.582637006756454 +/- 3.76e-16], 1), ([0.912751454705842 +/- 4.04e-16] + [0.742665591979299 +/- 1.98e-16]j, 1), ([0.912751454705842 +/- 4.04e-16] + [-0.742665591979299 +/- 1.98e-16]j, 1), ([-0.555810137481930 +/- 4.43e-16] + [0.805706304516960 +/- 4.93e-16]j, 1), ([-0.555810137481930 +/- 4.43e-16] + [-0.805706304516960 +/- 4.93e-16]j, 1), ([0.989150812201910 +/- 4.79e-16] + [1.82059670736243 +/- 1.14e-15]j, 1), ([0.989150812201910 +/- 4.79e-16] + [-1.82059670736243 +/- 1.14e-15]j, 1)] ``` > The precision vs calculation-time tradeoff is quite skewed. This is the reason we decided to use numpy in the first place Why should anyone use SymPy's control module at all? There are other Python control modules. If someone wants to use numpy to do a root locus plot then they can use the `control` module: https://python-control.readthedocs.io/en/latest/generated/control.root_locus.html What purpose is served by SymPy reimplementing things that already exist in other libraries? Providing more accurate calculations is one possible reason for SymPy to have a control module. Duplicating the features that already exist in other Python libraries is not. If SymPy's control module plotting functionality is not any better than the other control libraries then I don't think any plotting functionality should be added to SymPy's control module apart from just calling into the other libraries. > ```python > In [7]: %time p.complex_roots() > CPU times: user 6.61 ms, sys: 0 ns, total: 6.61 ms > Wall time: 76.5 ms > ``` Note actually that this time was inflated by initialising the printer. 
Here is the actual time:
```python
In [1]: from flint import *

In [2]: x = fmpz_poly([0, 1])

In [3]: p = x**8 + 9*x**5 + 11*x + 7

In [4]: %time r = p.complex_roots()
CPU times: user 2.19 ms, sys: 0 ns, total: 2.19 ms
Wall time: 2.21 ms
```
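For a self-contained illustration of the ordering pitfall discussed above, here is a small sketch of an order-insensitive root comparison. This is not SymPy's actual fix (which sorts with its internal `_nsort` helper and compares with `all_close` using relative error); the sort key and the tolerances below are arbitrary choices for the example.

```python
import numpy as np

def roots_allclose(computed, expected, rtol=1e-6, atol=1e-10):
    """Compare two collections of polynomial roots ignoring their order.

    np.roots() documents no particular ordering of the eigenvalues it
    returns, and the order observed on macOS changed between NumPy 1.x
    and 2.0, so both sides are put into a canonical order first.
    """
    def canonical(values):
        # Round the sort key slightly so ties are not broken by float noise.
        return sorted((complex(v) for v in values),
                      key=lambda z: (round(z.real, 9), round(z.imag, 9)))

    a = np.array(canonical(computed))
    b = np.array(canonical(expected))
    return a.shape == b.shape and np.allclose(a, b, rtol=rtol, atol=atol)

# The two orderings seen for p**2 + 0.5*p + 2 now compare equal:
print(roots_allclose(np.roots([1.0, 0.5, 2.0]),
                     [-0.25 + 1.3919410907075054j,
                      -0.25 - 1.3919410907075054j]))   # True
```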
[ { "body": "CC @akshanshbhatt @faze-geek \r\n\r\nThis is the second time I have asked to fix these tests (gh-23128).\r\n\r\nThis test fails with numpy 2.0:\r\n```console\r\n$ pytest sympy/physics/control/tests/test_control_plots.py::test_pole_zero\r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0\r\narchitecture: 64-bit\r\ncache: yes\r\nground types: flint (python-flint==0.6.0)\r\n\r\nrootdir: /Users/enojb/work/dev/sympy\r\nconfigfile: pyproject.toml\r\nplugins: instafail-0.5.0, doctestplus-1.2.2.dev5+g5176f32, hypothesis-6.99.9, xdist-3.5.0, timeout-2.3.1, split-0.8.2\r\ncollected 1 item\r\n\r\nsympy/physics/control/tests/test_control_plots.py F [100%]\r\n\r\n====================================================================================================== FAILURES ======================================================================================================\r\n___________________________________________________________________________________________________ test_pole_zero ___________________________________________________________________________________________________\r\n\r\n def test_pole_zero():\r\n if not numpy:\r\n skip(\"NumPy is required for this test\")\r\n\r\n def pz_tester(sys, expected_value):\r\n z, p = pole_zero_numerical_data(sys)\r\n z_check = numpy.allclose(z, expected_value[0])\r\n p_check = numpy.allclose(p, expected_value[1])\r\n return p_check and z_check\r\n\r\n exp1 = [[], [-0.24999999999999994+1.3919410907075054j, -0.24999999999999994-1.3919410907075054j]]\r\n exp2 = [[0.0], [-0.25+0.3227486121839514j, -0.25-0.3227486121839514j]]\r\n exp3 = [[0.0], [-0.5000000000000004+0.8660254037844395j,\r\n -0.5000000000000004-0.8660254037844395j, 0.9999999999999998+0j]]\r\n exp4 = [[], [5.0, 0.0, 0.0, 0.0]]\r\n exp5 = [[-5.645751311064592, -0.5000000000000008, -0.3542486889354093],\r\n [-0.24999999999999986+1.3919410907075052j,\r\n -0.24999999999999986-1.3919410907075052j, -0.2499999999999998+0.32274861218395134j,\r\n -0.2499999999999998-0.32274861218395134j]]\r\n exp6 = [[], [-1.1641600331447917-3.545808351896439j,\r\n -0.8358399668552097+2.5458083518964383j]]\r\n\r\n> assert pz_tester(tf1, exp1)\r\nE assert False\r\nE + where False = <function test_pole_zero.<locals>.pz_tester at 0x1148e6c00>(TransferFunction(1, p**2 + 0.5*p + 2, p), [[], [(-0.24999999999999994+1.3919410907075054j), (-0.24999999999999994-1.3919410907075054j)]])\r\n\r\nsympy/physics/control/tests/test_control_plots.py:121: AssertionError\r\n DO *NOT* COMMIT!\r\n============================================================================================== short test summary info ===============================================================================================\r\nFAILED sympy/physics/control/tests/test_control_plots.py::test_pole_zero - assert False\r\n================================================================================================= 1 failed in 0.29s ==================================================================================================\r\n```", "number": 26728, "title": "Control test fails with numpy 2.0" } ]
6d6e89caad6da38313a9174fc186b67904872a47
{ "head_commit": "dead56ec78be8c9957dbbed0c6526a421716d20e", "head_commit_message": "Update test_control_plots.py", "patch_to_review": "diff --git a/sympy/physics/control/control_plots.py b/sympy/physics/control/control_plots.py\nindex 3742de329e61..7f1fc5179ee6 100644\n--- a/sympy/physics/control/control_plots.py\n+++ b/sympy/physics/control/control_plots.py\n@@ -2,12 +2,14 @@\n from sympy.functions.elementary.exponential import (exp, log)\n from sympy.polys.partfrac import apart\n from sympy.core.symbol import Dummy\n+from sympy.core.sympify import _sympify\n from sympy.external import import_module\n from sympy.functions import arg, Abs\n from sympy.integrals.laplace import _fast_inverse_laplace\n from sympy.physics.control.lti import SISOLinearTimeInvariant\n from sympy.plotting.series import LineOver1DRangeSeries\n from sympy.polys.polytools import Poly\n+from sympy.polys.polyutils import _nsort\n from sympy.printing.latex import latex\n \n __all__ = ['pole_zero_numerical_data', 'pole_zero_plot',\n@@ -88,8 +90,8 @@ def pole_zero_numerical_data(system):\n >>> from sympy.physics.control.lti import TransferFunction\n >>> from sympy.physics.control.control_plots import pole_zero_numerical_data\n >>> tf1 = TransferFunction(s**2 + 1, s**4 + 4*s**3 + 6*s**2 + 5*s + 2, s)\n- >>> pole_zero_numerical_data(tf1) # doctest: +SKIP\n- ([-0.+1.j 0.-1.j], [-2. +0.j -0.5+0.8660254j -0.5-0.8660254j -1. +0.j ])\n+ >>> pole_zero_numerical_data(tf1)\n+ ([-1j, 1j], [-2.0, (-1.5-0.8660254j), (-0.5+0.8660254j)])\n \n See Also\n ========\n@@ -109,7 +111,12 @@ def pole_zero_numerical_data(system):\n zeros = np.roots(num_poly)\n poles = np.roots(den_poly)\n \n- return zeros, poles\n+ # make ordering canonical\n+ def _sort(l):\n+ return [float(i) if i.is_real else complex(i) for i in\n+ _nsort([_sympify(i) for i in l])]\n+\n+ return _sort(zeros), _sort(poles)\n \n \n def pole_zero_plot(system, pole_color='blue', pole_markersize=10,\ndiff --git a/sympy/physics/control/tests/test_control_plots.py b/sympy/physics/control/tests/test_control_plots.py\nindex 673fcee6cfdb..eeb73e6d3702 100644\n--- a/sympy/physics/control/tests/test_control_plots.py\n+++ b/sympy/physics/control/tests/test_control_plots.py\n@@ -1,5 +1,6 @@\n from math import isclose\n-from sympy.core.numbers import I\n+from sympy.core.numbers import I, all_close\n+from sympy.core.sympify import sympify\n from sympy.core.symbol import Dummy\n from sympy.functions.elementary.complexes import (Abs, arg)\n from sympy.functions.elementary.exponential import log\n@@ -101,20 +102,27 @@ def test_pole_zero():\n skip(\"NumPy is required for this test\")\n \n def pz_tester(sys, expected_value):\n- z, p = pole_zero_numerical_data(sys)\n- z_check = numpy.allclose(z, expected_value[0])\n- p_check = numpy.allclose(p, expected_value[1])\n+ z, p = [list(i) for i in pole_zero_numerical_data(sys)]\n+ def check(a, b):\n+ if isinstance(a, (list, tuple)):\n+ return all(check(i, j) for i,j in zip(a, b))\n+ a = sympify(a).n(chop=1e-10)\n+ if not b:\n+ return all_close(a, b, atol=1e-10, rtol=0)\n+ return all_close(a, b, atol=0, rtol=1e-6)\n+ z_check = check(z, expected_value[0])\n+ p_check = check(p, expected_value[1])\n return p_check and z_check\n \n- exp1 = [[], [-0.24999999999999994+1.3919410907075054j, -0.24999999999999994-1.3919410907075054j]]\n- exp2 = [[0.0], [-0.25+0.3227486121839514j, -0.25-0.3227486121839514j]]\n- exp3 = [[0.0], [-0.5000000000000004+0.8660254037844395j,\n- -0.5000000000000004-0.8660254037844395j, 0.9999999999999998+0j]]\n- exp4 = [[], [5.0, 0.0, 0.0, 
0.0]]\n+ exp1 = [[], [-0.24999999999999994-1.3919410907075054j, -0.24999999999999994+1.3919410907075054j]]\n+ exp2 = [[0.0], [-0.25-0.3227486121839514j, -0.25+0.3227486121839514j]]\n+ exp3 = [[0.0], [0.9999999999999998+0j, -0.5000000000000004-0.8660254037844395j,\n+ -0.5000000000000004+0.8660254037844395j]]\n+ exp4 = [[], [0.0, 0.0, 0.0, 5.0]]\n exp5 = [[-5.645751311064592, -0.5000000000000008, -0.3542486889354093],\n- [-0.24999999999999986+1.3919410907075052j,\n- -0.24999999999999986-1.3919410907075052j, -0.2499999999999998+0.32274861218395134j,\n- -0.2499999999999998-0.32274861218395134j]]\n+ [-0.24999999999999986-1.3919410907075052j,\n+ -0.24999999999999986-0.32274861218395134j, -0.2499999999999998+0.32274861218395134j,\n+ -0.2499999999999998+1.3919410907075052j]]\n exp6 = [[], [-1.1641600331447917-3.545808351896439j,\n -0.8358399668552097+2.5458083518964383j]]\n \n" }
[ { "diff_hunk": "@@ -88,8 +90,8 @@ def pole_zero_numerical_data(system):\n >>> from sympy.physics.control.lti import TransferFunction\n >>> from sympy.physics.control.control_plots import pole_zero_numerical_data\n >>> tf1 = TransferFunction(s**2 + 1, s**4 + 4*s**3 + 6*s**2 + 5*s + 2, s)\n- >>> pole_zero_numerical_data(tf1) # doctest: +SKIP\n- ([-0.+1.j 0.-1.j], [-2. +0.j -0.5+0.8660254j -0.5-0.8660254j -1. +0.j ])\n+ >>> pole_zero_numerical_data(tf1)", "line": null, "original_line": 93, "original_start_line": null, "path": "sympy/physics/control/control_plots.py", "start_line": null, "text": "@author:\n```suggestion\r\n >>> pole_zero_numerical_data(tf1) # doctest: +SKIP\r\n```" } ]
a3a545f1f3c0a507ae5e6925304e6ddca85796a1
diff --git a/sympy/physics/control/control_plots.py b/sympy/physics/control/control_plots.py index 3742de329e61..e233d5d0a4e2 100644 --- a/sympy/physics/control/control_plots.py +++ b/sympy/physics/control/control_plots.py @@ -2,12 +2,14 @@ from sympy.functions.elementary.exponential import (exp, log) from sympy.polys.partfrac import apart from sympy.core.symbol import Dummy +from sympy.core.sympify import _sympify from sympy.external import import_module from sympy.functions import arg, Abs from sympy.integrals.laplace import _fast_inverse_laplace from sympy.physics.control.lti import SISOLinearTimeInvariant from sympy.plotting.series import LineOver1DRangeSeries from sympy.polys.polytools import Poly +from sympy.polys.polyutils import _nsort from sympy.printing.latex import latex __all__ = ['pole_zero_numerical_data', 'pole_zero_plot', @@ -88,8 +90,8 @@ def pole_zero_numerical_data(system): >>> from sympy.physics.control.lti import TransferFunction >>> from sympy.physics.control.control_plots import pole_zero_numerical_data >>> tf1 = TransferFunction(s**2 + 1, s**4 + 4*s**3 + 6*s**2 + 5*s + 2, s) - >>> pole_zero_numerical_data(tf1) # doctest: +SKIP - ([-0.+1.j 0.-1.j], [-2. +0.j -0.5+0.8660254j -0.5-0.8660254j -1. +0.j ]) + >>> pole_zero_numerical_data(tf1) # doctest: +SKIP + ([-1j, 1j], [-2.0, (-1.5-0.8660254j), (-0.5+0.8660254j)]) See Also ======== @@ -109,7 +111,12 @@ def pole_zero_numerical_data(system): zeros = np.roots(num_poly) poles = np.roots(den_poly) - return zeros, poles + # make ordering canonical + def _sort(l): + return [float(i) if i.is_real else complex(i) for i in + _nsort([_sympify(i) for i in l])] + + return _sort(zeros), _sort(poles) def pole_zero_plot(system, pole_color='blue', pole_markersize=10, diff --git a/sympy/physics/control/tests/test_control_plots.py b/sympy/physics/control/tests/test_control_plots.py index 673fcee6cfdb..dbb9126762ed 100644 --- a/sympy/physics/control/tests/test_control_plots.py +++ b/sympy/physics/control/tests/test_control_plots.py @@ -1,8 +1,10 @@ from math import isclose -from sympy.core.numbers import I +from sympy.core.numbers import I, all_close +from sympy.core.sympify import _sympify from sympy.core.symbol import Dummy from sympy.functions.elementary.complexes import (Abs, arg) from sympy.functions.elementary.exponential import log +from sympy.polys.polyutils import _nsort from sympy.abc import s, p, a from sympy.external import import_module from sympy.physics.control.control_plots import \ @@ -101,20 +103,28 @@ def test_pole_zero(): skip("NumPy is required for this test") def pz_tester(sys, expected_value): - z, p = pole_zero_numerical_data(sys) - z_check = numpy.allclose(z, expected_value[0]) - p_check = numpy.allclose(p, expected_value[1]) + zp = pole_zero_numerical_data(sys) + zp = [[_sympify(i).n(chop=1e-10) for i in j] for j in zp] + z, p = [_nsort(i) for i in zp] + def check(a, b): + if isinstance(a, (list, tuple)): + return all(check(i, j) for i,j in zip(a, b)) + if not b: + return all_close(a, b, atol=1e-10, rtol=0) + return all_close(a, b, atol=0, rtol=1e-6) + z_check = check(z, expected_value[0]) + p_check = check(p, expected_value[1]) return p_check and z_check - exp1 = [[], [-0.24999999999999994+1.3919410907075054j, -0.24999999999999994-1.3919410907075054j]] - exp2 = [[0.0], [-0.25+0.3227486121839514j, -0.25-0.3227486121839514j]] - exp3 = [[0.0], [-0.5000000000000004+0.8660254037844395j, - -0.5000000000000004-0.8660254037844395j, 0.9999999999999998+0j]] - exp4 = [[], [5.0, 0.0, 0.0, 0.0]] + exp1 = [[], 
[-0.24999999999999994-1.3919410907075054j, -0.24999999999999994+1.3919410907075054j]] + exp2 = [[0.0], [-0.25-0.3227486121839514j, -0.25+0.3227486121839514j]] + exp3 = [[0.0], [0.9999999999999998+0j, -0.5000000000000004-0.8660254037844395j, + -0.5000000000000004+0.8660254037844395j]] + exp4 = [[], [0.0, 0.0, 0.0, 5.0]] exp5 = [[-5.645751311064592, -0.5000000000000008, -0.3542486889354093], - [-0.24999999999999986+1.3919410907075052j, - -0.24999999999999986-1.3919410907075052j, -0.2499999999999998+0.32274861218395134j, - -0.2499999999999998-0.32274861218395134j]] + [-0.24999999999999986-1.3919410907075052j, + -0.24999999999999986-0.32274861218395134j, -0.2499999999999998+0.32274861218395134j, + -0.2499999999999998+1.3919410907075052j]] exp6 = [[], [-1.1641600331447917-3.545808351896439j, -0.8358399668552097+2.5458083518964383j]] diff --git a/sympy/polys/polyutils.py b/sympy/polys/polyutils.py index b5c49fcc9d81..6a2019d3b195 100644 --- a/sympy/polys/polyutils.py +++ b/sympy/polys/polyutils.py @@ -39,6 +39,8 @@ def _nsort(roots, separated=False): """ if not all(r.is_number for r in roots): raise NotImplementedError + if not len(roots): + return [] if not separated else ([], []) # see issue 6137: # get the real part of the evaluated real and imaginary parts of each root key = [[i.n(2).as_real_imag()[0] for i in r.as_real_imag()] for r in roots]
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Dependency Updates & Env Compatibility" }
sympy__sympy-26780@7de5d9d
sympy/sympy
Python
26,780
Fix a bug with diophantine() when cornacchia() is not called properly
Occasionally diophantine() and diop_quadratic() result in a DN form $X^2 - DY^2 = N$ which cannot be handled by the current cornacchia() properly. For example, previously the diophantine solver will miss some solutions:

```
>>> eq1 = x**2 - 15*x + y**2 - 8*y # also mentioned in #18628
>>> diophantine(eq1)
{(0, 0), (15, 0), (15, 8), (0, 8)}
>>> eq1.subs({x: 16, y: 4})
0
```

```
>>> eq2 = 2*x**2 - 9*x + 4*y**2 - 8*y + 14
>>> diophantine(eq2)
set()
>>> eq2.subs({x: 2, y: 1})
0
```

Both the two examples lead to a function call of cornacchia(1, X, 1) with X > 1, which incorrectly turns out to have no solution by using cornacchia(). The solver can only be applied when $a+b\le m$. See reference from docs: http://www.numbertheory.org/php/cornacchia.html

An additional check has been added before the algorithm based on the fact that $ax^2 + by^2 = m$ admits a solution for $a + b > m$ only if $x = 0$ or $y = 0$. A test function including both the two examples has been appended as well.

<!-- Your title above should be a short description of what was changed. Do
not include the issue number in the title. -->

#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Fixes #18628

#### Brief description of what is fixed or changed
An additional check has been added before the cornacchia algorithm body based on the fact that $ax^2 + by^2 = m$ admits a solution for $a + b > m$ only if $x = 0$ or $y = 0$.

#### Other comments

#### Release Notes

<!-- Write the release notes for this release below between the BEGIN and END
statements. The basic format is a bulleted list with the name of the subpackage
and the release note for this PR. For example:

* solvers
  * Added a new solver for logarithmic equations.

* functions
  * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`.

* physics.units
  * Corrected a semantical error in the conversion between volt and statvolt which
  reported the volt as being larger than the statvolt.

or if no release note(s) should be included use:

NO ENTRY

See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more
information on how to write release notes. The bot will check your release
notes automatically to see if they are formatted correctly. -->

<!-- BEGIN RELEASE NOTES -->
* solvers
  * Fix a bug with cornacchia(). Formerly diophantine() and diop_quadratic() may miss some solutions for quadratic equations.
<!-- END RELEASE NOTES -->
2024-07-08T03:07:31Z
diophantine(), diop_quadratic() missing solutions

With sympy version 1.5.1:

```
from sympy import *
from sympy.solvers.diophantine import diop_quadratic

a, b, x, y = symbols('a b x y', integer=True, positive=True)
f = -a*x + x**2 - b*y + y**2
g = f.subs([(a, 15), (b, 8)])
print('diophantine (', g, ') solutions:', diophantine(g))
print('diop_quadratic(', g, ') solutions:', diop_quadratic(g))
h = f.subs([(a, 8), (b, 15)])
print('diophantine (', h, ') solutions:', diophantine(h))
print('diop_quadratic(', h, ') solutions:', diop_quadratic(h))
print(g.subs([(x, 16), (y, 4)]))
```

Output:

```
diophantine ( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 8)}
diop_quadratic( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 0), (0, 8), (15, 8), (0, 0)}
diophantine ( x**2 - 8*x + y**2 - 15*y ) solutions: {(8, 15), (4, 16)}
diop_quadratic( x**2 - 8*x + y**2 - 15*y ) solutions: {(0, 0), (0, 15), (8, 0), (4, 16), (4, -1), (8, 15)}
0
```

First, I expect diophantine() to return the same solutions as diop_quadratic(), but it doesn't.

Second, the function f() is symmetric in x and y, so the solutions to g() and h() should be symmetric, but they aren't. For example, (16, 4) is a solution to g(), but is missing from the solution set.
`diophantine` is doing additional filtering on the results to match the assumptions on `x` and `y` as positive integers. `diop_quadratic` has no such checks. The boundary between public and private methods in the diophantine solver is extremely ill-defined. I agree that there are solutions missing from your example. I see, thanks. Zero isn't considered positive, so I should have: `a, b, x, y = symbols('a b x y', integer=True, negative=False) ` Then the output is: ``` diophantine ( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 0), (0, 8), (15, 8), (0, 0)} diop_quadratic( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 0), (0, 8), (15, 8), (0, 0)} diophantine ( x**2 - 8*x + y**2 - 15*y ) solutions: {(0, 0), (0, 15), (8, 0), (4, 16), (8, 15)} diop_quadratic( x**2 - 8*x + y**2 - 15*y ) solutions: {(0, 0), (0, 15), (8, 0), (4, 16), (4, -1), (8, 15)} 0 ``` So the issue is with diop_quadratic(g), which is missing the solution (16, 4). I've investigated this further. The problem appears to actually be in `transformation_to_DN` and `find_DN`: ``` from sympy import * from sympy.solvers.diophantine import transformation_to_DN, find_DN, diop_DN d, e, x, y = symbols('d e x y', integer=True) f = x**2 + y**2 + d*x + e*y g = f.subs([(d, -15), (e, -8)]) h = f.subs([(d, -8), (e, -15)]) print(f'transformation_to_DN({g}) =') A, B = transformation_to_DN(g) print('A:', A) print('B:', B) D, N = find_DN(g) print(f'find_DN({g}) = {D}, {N}') print(f'diop_DN({D}, {N}) = {diop_DN(D, N)}') print('') print(f'transformation_to_DN({h}) =') A, B = transformation_to_DN(h) print('A:', A) print('B:', B) D, N = find_DN(h) print(f'find_DN({h}) = {D}, {N}') print(f'diop_DN({D}, {N}) = {diop_DN(D, N)}') ``` Output: ``` transformation_to_DN(x**2 - 15*x + y**2 - 8*y) = A: Matrix([[-1/2, 0], [0, -1]]) B: Matrix([[15/2], [4]]) find_DN(x**2 - 15*x + y**2 - 8*y) = -4, 289 diop_DN(-4, 289) = [(15, 4)] transformation_to_DN(x**2 - 8*x + y**2 - 15*y) = A: Matrix([[-1/4, 0], [0, -1/2]]) B: Matrix([[4], [15/2]]) find_DN(x**2 - 8*x + y**2 - 15*y) = -4, 1156 diop_DN(-4, 1156) = [(16, 15), (30, 8), (0, 17)] ``` My math isn't strong, but shouldn't `transformation_to_DN(x**2 - 15*x + y**2 - 8*y)` return: `A: Matrix([[-1/2, 0], [0, -1/4]]) ` and `find_DN(x**2 - 15*x + y**2 - 8*y)` also return `-4, 1156`? I don't understand the method used by these functions, and the link to the paper _Solving the equation ax^2 + bxy + cy^2 + dx + ey + f = 0, John P.Robertson, May 8, 2003_ ([http://www.jpr2718.org/ax2p.pdf](url)) is broken. [Here's a link that works: [https://vdocuments.mx/solving-the-equation-ax2-bxy-cy2-dx-ey-f-0.html](url)] The transformation is not unique. It seems that the one defined by `Matrix([[-1/2, 0], [0, -1]])` would also be valid. However, `diop_DN(-4, 289)` does not return the non-primitive solution `(17, 0)` of `x**2 + 4*y**2 = 289`because `cornacchia` does not find the trivial solution `(1, 0)` of the equation `x**2 + 4*y**2 = 1`: ``` >>> from sympy.solvers.diophantine.diophantine import cornacchia >>> cornacchia(1, 4, 1) set() ``` That looks like a bug in `cornacchia`. From [http://www.numbertheory.org/php/cornacchia.html](url), it appears that `cornacchia` isn't applicable because a + b > m. 
From the documentation for `cornacchia`: Solving the diophantine equation ax**2 + by**2 = m by Cornacchia’s method, [online], Available: [http://www.numbertheory.org/php/cornacchia.html](http://www.numbertheory.org/php/cornacchia.html) The page at that link states: _Here a > 0, b > 0,m ≥ a+b, gcd(a,m)=1=gcd(a,b)_, so Cornacchia’s method fails because `1 + 4 > 1`, making me think that `cornacchia `isn't applicable in this case. > From the documentation for `cornacchia`: > > Solving the diophantine equation ax**2 + by**2 = m by Cornacchia’s method, [online], Available: http://www.numbertheory.org/php/cornacchia.html > > The page at that link states: _Here a > 0, b > 0,m ≥ a+b, gcd(a,m)=1=gcd(a,b)_, so Cornacchia’s method fails because `1 + 4 > 1`, making me think that `cornacchia `isn't applicable in this case. I've redacted my statements above since I now realize that I did not fully understand the constraints on the coefficients in the equation <code>a*x^2 + b*y^2 = m</code> when solving it using Cornacchia’s method. Thank you for clarifying, I now understand the problem much better. > The transformation is not unique. It seems that the one defined by `Matrix([[-1/2, 0], [0, -1]])` would also be valid. However, `diop_DN(-4, 289)` does not return the non-primitive solution `(17, 0)` of `x**2 + 4*y**2 = 289`because `cornacchia` does not find the trivial solution `(1, 0)` of the equation `x**2 + 4*y**2 = 1`: > > ``` > >>> from sympy.solvers.diophantine.diophantine import cornacchia > >>> cornacchia(1, 4, 1) > set() > ``` > > That looks like a bug in `cornacchia`. There is not a bug in the <code>cornacchia</code> method (at least not for this reason). @kgorlen is correct, we can not use Cornacchia’s method in this instance since a + b > m.
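As a standalone illustration of the `a + b > m` corner case settled in this thread, here is a sketch (not the exact code merged into `cornacchia`, which also avoids a duplicate solution when `a == b`): under the usual preconditions `a, b > 0`, `gcd(a, b) = gcd(a, m) = 1`, any solution of `a*x**2 + b*y**2 = m` with `a + b > m` must have `x = 0` or `y = 0`, so two perfect-square tests suffice.

```python
from math import isqrt

def corner_case_solutions(a, b, m):
    """Solutions of a*x**2 + b*y**2 = m when a + b > m (with x, y >= 0).

    If x >= 1 and y >= 1 then a*x**2 + b*y**2 >= a + b > m, so one of the
    two variables must vanish; m/a or m/b then has to be a perfect square.
    """
    sols = set()
    if m % a == 0:
        s = isqrt(m // a)
        if s * s == m // a:
            sols.add((s, 0))
    if m % b == 0:
        s = isqrt(m // b)
        if s * s == m // b:
            sols.add((0, s))
    return sols

# The case that started this discussion: x**2 + 4*y**2 = 1 has the
# trivial solution (1, 0), which the old cornacchia(1, 4, 1) missed.
print(corner_case_solutions(1, 4, 1))  # {(1, 0)}
```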
[ { "body": "With sympy version 1.5.1:\r\n\r\n```\r\nfrom sympy import *\r\nfrom sympy.solvers.diophantine import diop_quadratic\r\n\r\na, b, x, y = symbols('a b x y', integer=True, positive=True)\r\nf = -a*x + x**2 - b*y + y**2\r\ng = f.subs([(a, 15), (b, 8)])\r\nprint('diophantine (', g, ') solutions:', diophantine(g))\r\nprint('diop_quadratic(', g, ') solutions:', diop_quadratic(g))\r\nh = f.subs([(a, 8), (b, 15)])\r\nprint('diophantine (', h, ') solutions:', diophantine(h))\r\nprint('diop_quadratic(', h, ') solutions:', diop_quadratic(h))\r\nprint(g.subs([(x, 16), (y, 4)]))\r\n\r\n```\r\nOutput:\r\n\r\n```\r\ndiophantine ( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 8)}\r\ndiop_quadratic( x**2 - 15*x + y**2 - 8*y ) solutions: {(15, 0), (0, 8), (15, 8), (0, 0)}\r\ndiophantine ( x**2 - 8*x + y**2 - 15*y ) solutions: {(8, 15), (4, 16)}\r\ndiop_quadratic( x**2 - 8*x + y**2 - 15*y ) solutions: {(0, 0), (0, 15), (8, 0), (4, 16), (4, -1), (8, 15)}\r\n0\r\n\r\n```\r\nFirst, I expect diophantine() to return the same solutions as diop_quadratic(), but it doesn't.\r\n\r\nSecond, the function f() is symmetric in x and y, so the solutions to g() and h() should be symmetric, but they aren't. For example, (16, 4) is a solution to g(), but is missing from the solution set.", "number": 18628, "title": "diophantine(), diop_quadratic() missing solutions" } ]
16362cc09da946b518f549f249e81672a221030d
{ "head_commit": "7de5d9d6be7fcb4383c8fc5bcd13197da5ed8745", "head_commit_message": "fix typos in the comments of cornacchia()", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 55b874e6f897..d67a21ac36df 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1550,6 +1550,7 @@ Zeel Shah <[email protected]>\n Zhenxu Zhu <[email protected]> xzdlj <[email protected]>\n Zhi-Qiang Zhou <[email protected]> zhouzq-thu <[email protected]>\n Zhongshi <[email protected]>\n+Zhuoyuan Li <[email protected]> zylipku <[email protected]>\n Zlatan Vasović <[email protected]>\n Zoufiné Lauer-Baré <[email protected]> Zoufiné Lauer-Baré <[email protected]>\n Zoufiné Lauer-Baré <[email protected]> zolabar <[email protected]>\ndiff --git a/sympy/solvers/diophantine/diophantine.py b/sympy/solvers/diophantine/diophantine.py\nindex 3df4fe9b0df1..fc38c7f19d35 100644\n--- a/sympy/solvers/diophantine/diophantine.py\n+++ b/sympy/solvers/diophantine/diophantine.py\n@@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]:\n \"\"\"\n # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking\n sols = set()\n+\n+ if a + b > m:\n+ # xy = 0 must hold if there exists a solution\n+ if m % a == 0:\n+ # y = 0\n+ s, _exact = iroot(m // a, 2)\n+ if _exact:\n+ sols.add((int(s), int(0)))\n+ if a == b:\n+ # only keep one solution\n+ return sols\n+ if m % b == 0:\n+ # x = 0\n+ s, _exact = iroot(m // b, 2)\n+ if _exact:\n+ sols.add((int(0), int(s)))\n+ return sols\n+\n+ # the original cornacchia\n for t in sqrt_mod_iter(-b*invert(a, m), m):\n if t < m // 2:\n continue\ndiff --git a/sympy/solvers/diophantine/tests/test_diophantine.py b/sympy/solvers/diophantine/tests/test_diophantine.py\nindex 094770b7bba7..a500903fc06f 100644\n--- a/sympy/solvers/diophantine/tests/test_diophantine.py\n+++ b/sympy/solvers/diophantine/tests/test_diophantine.py\n@@ -1049,3 +1049,11 @@ def test_quadratic_parameter_passing():\n # test that parameters are passed all the way to the final solution\n assert solution == {(t, 11*t), (t, -22*t)}\n assert solution(0, 0) == {(0, 0)}\n+\n+def test_issue_18628():\n+ eq1 = x**2 - 15*x + y**2 - 8*y\n+ sol = diophantine(eq1)\n+ assert sol == {(15, 0), (15, 8), (-1, 4), (0, 0), (0, 8), (16, 4)}\n+ eq2 = 2*x**2 - 9*x + 4*y**2 - 8*y + 14\n+ sol = diophantine(eq2)\n+ assert sol == {(2, 1)}\n" }
[ { "diff_hunk": "@@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]:\n \"\"\"\n # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking\n sols = set()\n+\n+ if a + b > m:\n+ # xy = 0 must hold if there exists a solution\n+ if m % a == 0:\n+ # y = 0\n+ s, _exact = iroot(m // a, 2)\n+ if _exact:\n+ sols.add((int(s), int(0)))\n+ if a == b:\n+ # only keep one solution\n+ return sols\n+ if m % b == 0:\n+ # x = 0\n+ s, _exact = iroot(m // b, 2)\n+ if _exact:\n+ sols.add((int(0), int(s)))", "line": null, "original_line": 2192, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\n```suggestion\r\n sols.add((0, int(s)))\r\n```" }, { "diff_hunk": "@@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]:\n \"\"\"\n # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking\n sols = set()\n+\n+ if a + b > m:\n+ # xy = 0 must hold if there exists a solution\n+ if m % a == 0:\n+ # y = 0\n+ s, _exact = iroot(m // a, 2)\n+ if _exact:\n+ sols.add((int(s), int(0)))", "line": null, "original_line": 2184, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\n```suggestion\r\n sols.add((int(s), 0))\r\n```" }, { "diff_hunk": "@@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]:\n \"\"\"\n # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking\n sols = set()\n+\n+ if a + b > m:\n+ # xy = 0 must hold if there exists a solution\n+ if m % a == 0:\n+ # y = 0\n+ s, _exact = iroot(m // a, 2)\n+ if _exact:\n+ sols.add((int(s), int(0)))\n+ if a == b:\n+ # only keep one solution", "line": null, "original_line": 2186, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\n```suggestion\r\n # only keep one solution\r\n```" }, { "diff_hunk": "@@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]:\n \"\"\"\n # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking\n sols = set()\n+\n+ if a + b > m:\n+ # xy = 0 must hold if there exists a solution\n+ if m % a == 0:", "line": null, "original_line": 2180, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\nGiven the constraint `gcd(a, m) = 1`, I think the condition `if a == 1` should be sufficient." } ]
875e72dbfef4fd279230baf86c29452057506f63
diff --git a/.mailmap b/.mailmap index 55b874e6f897..86b8e0da17b8 100644 --- a/.mailmap +++ b/.mailmap @@ -1550,6 +1550,8 @@ Zeel Shah <[email protected]> Zhenxu Zhu <[email protected]> xzdlj <[email protected]> Zhi-Qiang Zhou <[email protected]> zhouzq-thu <[email protected]> Zhongshi <[email protected]> +Zhuoyuan Li <[email protected]> zylipku <[email protected]> +Zhuoyuan Li <[email protected]> zylipku <[email protected]> Zlatan Vasović <[email protected]> Zoufiné Lauer-Baré <[email protected]> Zoufiné Lauer-Baré <[email protected]> Zoufiné Lauer-Baré <[email protected]> zolabar <[email protected]> diff --git a/sympy/solvers/diophantine/diophantine.py b/sympy/solvers/diophantine/diophantine.py index 3df4fe9b0df1..d02d0689d814 100644 --- a/sympy/solvers/diophantine/diophantine.py +++ b/sympy/solvers/diophantine/diophantine.py @@ -2174,6 +2174,25 @@ def cornacchia(a:int, b:int, m:int) -> set[tuple[int, int]]: """ # Assume gcd(a, b) = gcd(a, m) = 1 and a, b > 0 but no error checking sols = set() + + if a + b > m: + # xy = 0 must hold if there exists a solution + if a == 1: + # y = 0 + s, _exact = iroot(m // a, 2) + if _exact: + sols.add((int(s), 0)) + if a == b: + # only keep one solution + return sols + if m % b == 0: + # x = 0 + s, _exact = iroot(m // b, 2) + if _exact: + sols.add((0, int(s))) + return sols + + # the original cornacchia for t in sqrt_mod_iter(-b*invert(a, m), m): if t < m // 2: continue diff --git a/sympy/solvers/diophantine/tests/test_diophantine.py b/sympy/solvers/diophantine/tests/test_diophantine.py index 094770b7bba7..a500903fc06f 100644 --- a/sympy/solvers/diophantine/tests/test_diophantine.py +++ b/sympy/solvers/diophantine/tests/test_diophantine.py @@ -1049,3 +1049,11 @@ def test_quadratic_parameter_passing(): # test that parameters are passed all the way to the final solution assert solution == {(t, 11*t), (t, -22*t)} assert solution(0, 0) == {(0, 0)} + +def test_issue_18628(): + eq1 = x**2 - 15*x + y**2 - 8*y + sol = diophantine(eq1) + assert sol == {(15, 0), (15, 8), (-1, 4), (0, 0), (0, 8), (16, 4)} + eq2 = 2*x**2 - 9*x + 4*y**2 - 8*y + 14 + sol = diophantine(eq2) + assert sol == {(2, 1)}
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26623@a5db5b8
sympy/sympy
Python
26,623
vector: created VectorKind and Add with vectors now calls VectorAdd
#### References to other Issues or PRs Fixes #26121 #### Brief description of what is fixed or changed Created a VectorKind class, which is the kind for all vector objects in Sympy. The result of operations between vectors, such as Cross, also has VectorKind. Added a postprocessor for Add with respect to Vector class to make sure that when calling Add with vectors as arguments the result is an instance of VectorAdd. Added tests to verify all this. Example: In [1]: Sys = CoordSys3D('Sys') In [2]: Sys.i.kind Out [2]: VectorKind In [3]: isinstance(Add(Sys.i, Sys.j), VectorAdd) Out [3]: True #### Other comments #### Release Notes <!-- BEGIN RELEASE NOTES --> * vector * Added VectorKind * Add with vectors now calls VectorAdd <!-- END RELEASE NOTES -->
2024-05-26T21:35:44Z
matrix of vectors destroys vector type from sympy.matrices import Matrix from sympy.vector import CoordSys3D, Vector Sys = CoordSys3D('Sys') O = Sys.origin # The row matrix of unit vectors E = Matrix([Sys.i, Sys.j, Sys.k]).T # A Column matrix of coordinates a = Matrix([1, 2, 3]) # Obtain the matrix product av = E*a # The following does not work! Why is av[0] not a vector anymore? # Is this a bug? Am I missing something important? A = O.locate_new('A', av[0])
It is because: ```python In [61]: type(Add(Sys.i, Sys.j)) Out[61]: sympy.core.add.Add In [62]: type(Sys.i + Sys.j) Out[62]: sympy.vector.vector.VectorAdd ``` The Matrix code assumes that it can add things with `Add` but there is no mechanism that makes this work with vectors. Ideally the `.kind` would be used for this so that `Add` can dispatch to `VectorAdd` or something like that. Currently `.kind` is undefined for vectors though. Do you think we can handle it any other way like checking the type and removing the vector part initially and adding them like scalars and adding them back later. This sort of problem arises any time `Add(a, b)` is not the same as `a + b`. The vector code defines `__add__` and it is basically always a problem for an Expr class to define `__add__`. Instead what is needed is a way to make it so that `Add(a, b)` can do whatever should be done in the case of vectors. > Ideally the .kind would be used for this so that Add can dispatch to VectorAdd or something like that. Currently .kind is undefined for vectors though. What would you suggest on how to make these changes , I tried looking into it but it seems like it is connected to many different files and classes . ``` >>> E = Matrix([Sys.i, Sys.j, Sys.k]).T >>> E[0].kind UndefinedKind >>> E.kind MatrixKind(UndefinedKind) ``` is what we currently get . Do you think it should be changed so that `.kind` gives something like `VectorAdd` . > An expression representing a matrix may not be an instance of the Matrix class, but it will have kind ``MatrixKind``: Sorry for any unnecessary questions but can't we make vectors work the same way ? So they can always have Something like `Vector` as its kind Yes, ideally the kind of all vector objects would VectorKind. Where would we need to make the changes to ensure the `.kind` gives the desired result for vectors . Once one had implemented the VectorKind, what would be the next step to ensure that Add could dispatch to VectorAdd? I'm not sure. Maybe look at how it works for MatrixKind.
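For reference, the MatrixKind precedent mentioned at the end of the discussion above can be seen directly from the public API. This is only an illustrative sketch of the existing matrix behaviour, not code from the PR:

```python
from sympy import Add, MatrixSymbol

A = MatrixSymbol('A', 2, 2)
B = MatrixSymbol('B', 2, 2)

# For matrix expressions a constructor postprocessor already makes the plain
# Add constructor produce a MatAdd, and .kind reports MatrixKind.
print(type(Add(A, B)).__name__)   # MatAdd
print(A.kind)                     # MatrixKind(NumberKind)
print(Add(A, B).kind)             # MatrixKind(NumberKind)
```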
[ { "body": " from sympy.matrices import Matrix\r\n from sympy.vector import CoordSys3D, Vector\r\n\r\n Sys = CoordSys3D('Sys')\r\n O = Sys.origin\r\n\r\n # The row matrix of unit vectors\r\n E = Matrix([Sys.i, Sys.j, Sys.k]).T\r\n\r\n # A Column matrix of coordinates\r\n a = Matrix([1, 2, 3])\r\n\r\n # Obtain the matrix product\r\n av = E*a\r\n\r\n # The following does not work! Why is av[0] not a vector anymore? \r\n # Is this a bug? Am I missing something important?\r\n A = O.locate_new('A', av[0])\r\n\r\n", "number": 26121, "title": "matrix of vectors destroys vector type" } ]
ae3027b4ca682023dee36e7e8883311c62cc08e3
{ "head_commit": "a5db5b850cdeca1d9c06b001977c8aef13f0a110", "head_commit_message": "vector: updated VectorKind docstring\n\nAfter making VectorKind a parametrised kind, we forgot\nto update the docstring. It is updated now.", "patch_to_review": "diff --git a/sympy/vector/__init__.py b/sympy/vector/__init__.py\nindex 9228befbdb4b..f6757bbeb350 100644\n--- a/sympy/vector/__init__.py\n+++ b/sympy/vector/__init__.py\n@@ -17,11 +17,14 @@\n from sympy.vector.implicitregion import ImplicitRegion\n from sympy.vector.parametricregion import (ParametricRegion, parametric_region_list)\n from sympy.vector.integrals import (ParametricIntegral, vector_integrate)\n+from sympy.vector.kind import VectorKind\n \n __all__ = [\n 'Vector', 'VectorAdd', 'VectorMul', 'BaseVector', 'VectorZero', 'Cross',\n 'Dot', 'cross', 'dot',\n \n+ 'VectorKind',\n+\n 'Dyadic', 'DyadicAdd', 'DyadicMul', 'BaseDyadic', 'DyadicZero',\n \n 'BaseScalar',\ndiff --git a/sympy/vector/kind.py b/sympy/vector/kind.py\nnew file mode 100644\nindex 000000000000..c6c04896b34c\n--- /dev/null\n+++ b/sympy/vector/kind.py\n@@ -0,0 +1,67 @@\n+#sympy.vector.kind\n+\n+from sympy.core.kind import Kind, _NumberKind, NumberKind\n+from sympy.core.mul import Mul\n+\n+class VectorKind(Kind):\n+ \"\"\"\n+ Kind for all vector objects in SymPy.\n+\n+ Parameters\n+ ==========\n+\n+ element_kind : Kind\n+ Kind of the element. Default is\n+ :class:`sympy.core.kind.NumberKind`,\n+ which means that the vector contains only numbers.\n+\n+ Examples\n+ ========\n+\n+ Any instance of Vector class has kind ``VectorKind``:\n+\n+ >>> from sympy.vector.coordsysrect import CoordSys3D\n+ >>> Sys = CoordSys3D('Sys')\n+ >>> Sys.i.kind\n+ VectorKind(NumberKind)\n+\n+ Operations between instances of Vector keep also have the kind ``VectorKind``:\n+\n+ >>> from sympy.core.add import Add\n+ >>> v1 = Sys.i * 2 + Sys.j * 3 + Sys.k * 4\n+ >>> v2 = Sys.i * Sys.x + Sys.j * Sys.y + Sys.k * Sys.z\n+ >>> v1.kind\n+ VectorKind(NumberKind)\n+ >>> v2.kind\n+ VectorKind(NumberKind)\n+ >>> Add(v1, v2).kind\n+ VectorKind(NumberKind)\n+\n+ Subclasses of Vector also have the kind ``VectorKind``, such as\n+ Cross, VectorAdd, VectorMul or VectorZero.\n+\n+ See Also\n+ ========\n+\n+ sympy.core.kind.Kind\n+ sympy.matrices.kind.MatrixKind\n+\n+ \"\"\"\n+ def __new__(cls, element_kind=NumberKind):\n+ obj = super().__new__(cls, element_kind)\n+ obj.element_kind = element_kind\n+ return obj\n+\n+ def __repr__(self):\n+ return \"VectorKind(%s)\" % self.element_kind\n+\n+@Mul._kind_dispatcher.register(_NumberKind, VectorKind)\n+def num_vec_mul(k1, k2):\n+ \"\"\"\n+ The result of a multiplication between a number and a Vector should be of VectorKind.\n+ The element kind is selected by recursive dispatching.\n+ \"\"\"\n+ if not isinstance(k2, VectorKind):\n+ k1, k2 = k2, k1\n+ elemk = Mul._kind_dispatcher(k1, k2.element_kind)\n+ return VectorKind(elemk)\ndiff --git a/sympy/vector/scalar.py b/sympy/vector/scalar.py\nindex 42742b021ea5..bcfb56cf177b 100644\n--- a/sympy/vector/scalar.py\n+++ b/sympy/vector/scalar.py\n@@ -2,6 +2,7 @@\n from sympy.core.sympify import _sympify\n from sympy.printing.pretty.stringpict import prettyForm\n from sympy.printing.precedence import PRECEDENCE\n+from sympy.core.kind import NumberKind\n \n \n class BaseScalar(AtomicExpr):\n@@ -12,6 +13,8 @@ class BaseScalar(AtomicExpr):\n \n \"\"\"\n \n+ kind = NumberKind\n+\n def __new__(cls, index, system, pretty_str=None, latex_str=None):\n from sympy.vector.coordsysrect import CoordSys3D\n if pretty_str is None:\ndiff --git 
a/sympy/vector/tests/test_vector.py b/sympy/vector/tests/test_vector.py\nindex b68fb9fb3efb..64cdc0cb8b59 100644\n--- a/sympy/vector/tests/test_vector.py\n+++ b/sympy/vector/tests/test_vector.py\n@@ -1,4 +1,4 @@\n-from sympy.core import Rational, S\n+from sympy.core import Rational, S, Add, Mul\n from sympy.simplify import simplify, trigsimp\n from sympy.core.function import (Derivative, Function, diff)\n from sympy.core.numbers import pi\n@@ -12,6 +12,8 @@\n from sympy.vector.coordsysrect import CoordSys3D\n from sympy.vector.vector import Cross, Dot, cross\n from sympy.testing.pytest import raises\n+from sympy.vector.kind import VectorKind\n+from sympy.core.kind import NumberKind\n \n C = CoordSys3D('C')\n \n@@ -52,6 +54,50 @@ def test_vector_sympy():\n assert v3.__hash__() == v2.__hash__()\n \n \n+def test_kind():\n+ assert C.i.kind is VectorKind(NumberKind)\n+ assert C.j.kind is VectorKind(NumberKind)\n+ assert C.k.kind is VectorKind(NumberKind)\n+\n+ assert C.x.kind is NumberKind\n+ assert C.y.kind is NumberKind\n+ assert C.z.kind is NumberKind\n+\n+ assert Mul._kind_dispatcher(NumberKind, VectorKind(NumberKind)) is VectorKind(NumberKind)\n+ assert Mul(2, C.i).kind is VectorKind(NumberKind)\n+\n+ v1 = C.x * i + C.z * C.z * j\n+ v2 = C.x * i + C.y * j + C.z * k\n+ assert v1.kind is VectorKind(NumberKind)\n+ assert v2.kind is VectorKind(NumberKind)\n+\n+ assert (v1 + v2).kind is VectorKind(NumberKind)\n+ assert Add(v1, v2).kind is VectorKind(NumberKind)\n+ assert Cross(v1, v2).doit().kind is VectorKind(NumberKind)\n+ assert VectorAdd(v1, v2).kind is VectorKind(NumberKind)\n+ assert VectorMul(2, v1).kind is VectorKind(NumberKind)\n+ assert VectorZero().kind is VectorKind(NumberKind)\n+\n+ assert v1.projection(v2).kind is VectorKind(NumberKind)\n+ assert v2.projection(v1).kind is VectorKind(NumberKind)\n+\n+\n+def test_vectoradd():\n+ assert isinstance(Add(C.i, C.j), VectorAdd)\n+ v1 = C.x * i + C.z * C.z * j\n+ v2 = C.x * i + C.y * j + C.z * k\n+ assert isinstance(Add(v1, v2), VectorAdd)\n+\n+ # https://github.com/sympy/sympy/issues/26121\n+\n+ E = Matrix([C.i, C.j, C.k]).T\n+ a = Matrix([1, 2, 3])\n+ av = E*a\n+\n+ assert av[0].kind == VectorKind()\n+ assert isinstance(av[0], VectorAdd)\n+\n+\n def test_vector():\n assert isinstance(i, BaseVector)\n assert i != j\ndiff --git a/sympy/vector/vector.py b/sympy/vector/vector.py\nindex d64ea6d70890..4f5ff14600c6 100644\n--- a/sympy/vector/vector.py\n+++ b/sympy/vector/vector.py\n@@ -1,7 +1,7 @@\n from __future__ import annotations\n from itertools import product\n \n-from sympy.core.add import Add\n+from sympy.core import Add, Basic\n from sympy.core.assumptions import StdFactKB\n from sympy.core.expr import AtomicExpr, Expr\n from sympy.core.power import Pow\n@@ -14,6 +14,7 @@\n BasisDependent, BasisDependentMul, BasisDependentAdd)\n from sympy.vector.coordsysrect import CoordSys3D\n from sympy.vector.dyadic import Dyadic, BaseDyadic, DyadicAdd\n+from sympy.vector.kind import VectorKind\n \n \n class Vector(BasisDependent):\n@@ -34,6 +35,8 @@ class Vector(BasisDependent):\n _base_func: type[Vector]\n zero: VectorZero\n \n+ kind: VectorKind = VectorKind()\n+\n @property\n def components(self):\n \"\"\"\n@@ -344,6 +347,24 @@ def _div_helper(one, other):\n else:\n raise TypeError(\"Invalid division involving a vector\")\n \n+# The following is adapted from the matrices.expressions.matexpr file\n+\n+def get_postprocessor(cls):\n+ def _postprocessor(expr):\n+ vec_class = {Add: VectorAdd}[cls]\n+ vectors = []\n+ for term in expr.args:\n+ if 
term.kind is VectorKind():\n+ vectors.append(term)\n+\n+ if vec_class == VectorAdd:\n+ return VectorAdd(*vectors).doit(deep=False)\n+ return _postprocessor\n+\n+\n+Basic._constructor_postprocessor_mapping[Vector] = {\n+ \"Add\": [get_postprocessor(Add)],\n+}\n \n class BaseVector(Vector, AtomicExpr):\n \"\"\"\n" }
[ { "diff_hunk": "@@ -344,6 +347,24 @@ def _div_helper(one, other):\n else:\n raise TypeError(\"Invalid division involving a vector\")\n \n+# The following is adapted from the matrices.expressions.matexpr file\n+\n+def get_postprocessor(cls):\n+ def _postprocessor(expr):\n+ vec_class = {Add: VectorAdd}[cls]\n+ vectors = []\n+ for term in expr.args:\n+ if term.kind is VectorKind():", "line": null, "original_line": 357, "original_start_line": null, "path": "sympy/vector/vector.py", "start_line": null, "text": "@user1:\nShould this not check something like `if isinstance(term.kind, VectorKind)` because the kind might not be `VectorKind(NumberKind)`?\n\n@author:\nYes, it should. I'll change it now." } ]
c4a8977c6a705fc6cfb366b704d3b659c3975631
diff --git a/sympy/vector/__init__.py b/sympy/vector/__init__.py index 9228befbdb4b..f6757bbeb350 100644 --- a/sympy/vector/__init__.py +++ b/sympy/vector/__init__.py @@ -17,11 +17,14 @@ from sympy.vector.implicitregion import ImplicitRegion from sympy.vector.parametricregion import (ParametricRegion, parametric_region_list) from sympy.vector.integrals import (ParametricIntegral, vector_integrate) +from sympy.vector.kind import VectorKind __all__ = [ 'Vector', 'VectorAdd', 'VectorMul', 'BaseVector', 'VectorZero', 'Cross', 'Dot', 'cross', 'dot', + 'VectorKind', + 'Dyadic', 'DyadicAdd', 'DyadicMul', 'BaseDyadic', 'DyadicZero', 'BaseScalar', diff --git a/sympy/vector/kind.py b/sympy/vector/kind.py new file mode 100644 index 000000000000..c6c04896b34c --- /dev/null +++ b/sympy/vector/kind.py @@ -0,0 +1,67 @@ +#sympy.vector.kind + +from sympy.core.kind import Kind, _NumberKind, NumberKind +from sympy.core.mul import Mul + +class VectorKind(Kind): + """ + Kind for all vector objects in SymPy. + + Parameters + ========== + + element_kind : Kind + Kind of the element. Default is + :class:`sympy.core.kind.NumberKind`, + which means that the vector contains only numbers. + + Examples + ======== + + Any instance of Vector class has kind ``VectorKind``: + + >>> from sympy.vector.coordsysrect import CoordSys3D + >>> Sys = CoordSys3D('Sys') + >>> Sys.i.kind + VectorKind(NumberKind) + + Operations between instances of Vector keep also have the kind ``VectorKind``: + + >>> from sympy.core.add import Add + >>> v1 = Sys.i * 2 + Sys.j * 3 + Sys.k * 4 + >>> v2 = Sys.i * Sys.x + Sys.j * Sys.y + Sys.k * Sys.z + >>> v1.kind + VectorKind(NumberKind) + >>> v2.kind + VectorKind(NumberKind) + >>> Add(v1, v2).kind + VectorKind(NumberKind) + + Subclasses of Vector also have the kind ``VectorKind``, such as + Cross, VectorAdd, VectorMul or VectorZero. + + See Also + ======== + + sympy.core.kind.Kind + sympy.matrices.kind.MatrixKind + + """ + def __new__(cls, element_kind=NumberKind): + obj = super().__new__(cls, element_kind) + obj.element_kind = element_kind + return obj + + def __repr__(self): + return "VectorKind(%s)" % self.element_kind + +@Mul._kind_dispatcher.register(_NumberKind, VectorKind) +def num_vec_mul(k1, k2): + """ + The result of a multiplication between a number and a Vector should be of VectorKind. + The element kind is selected by recursive dispatching. 
+ """ + if not isinstance(k2, VectorKind): + k1, k2 = k2, k1 + elemk = Mul._kind_dispatcher(k1, k2.element_kind) + return VectorKind(elemk) diff --git a/sympy/vector/scalar.py b/sympy/vector/scalar.py index 42742b021ea5..bcfb56cf177b 100644 --- a/sympy/vector/scalar.py +++ b/sympy/vector/scalar.py @@ -2,6 +2,7 @@ from sympy.core.sympify import _sympify from sympy.printing.pretty.stringpict import prettyForm from sympy.printing.precedence import PRECEDENCE +from sympy.core.kind import NumberKind class BaseScalar(AtomicExpr): @@ -12,6 +13,8 @@ class BaseScalar(AtomicExpr): """ + kind = NumberKind + def __new__(cls, index, system, pretty_str=None, latex_str=None): from sympy.vector.coordsysrect import CoordSys3D if pretty_str is None: diff --git a/sympy/vector/tests/test_vector.py b/sympy/vector/tests/test_vector.py index b68fb9fb3efb..64cdc0cb8b59 100644 --- a/sympy/vector/tests/test_vector.py +++ b/sympy/vector/tests/test_vector.py @@ -1,4 +1,4 @@ -from sympy.core import Rational, S +from sympy.core import Rational, S, Add, Mul from sympy.simplify import simplify, trigsimp from sympy.core.function import (Derivative, Function, diff) from sympy.core.numbers import pi @@ -12,6 +12,8 @@ from sympy.vector.coordsysrect import CoordSys3D from sympy.vector.vector import Cross, Dot, cross from sympy.testing.pytest import raises +from sympy.vector.kind import VectorKind +from sympy.core.kind import NumberKind C = CoordSys3D('C') @@ -52,6 +54,50 @@ def test_vector_sympy(): assert v3.__hash__() == v2.__hash__() +def test_kind(): + assert C.i.kind is VectorKind(NumberKind) + assert C.j.kind is VectorKind(NumberKind) + assert C.k.kind is VectorKind(NumberKind) + + assert C.x.kind is NumberKind + assert C.y.kind is NumberKind + assert C.z.kind is NumberKind + + assert Mul._kind_dispatcher(NumberKind, VectorKind(NumberKind)) is VectorKind(NumberKind) + assert Mul(2, C.i).kind is VectorKind(NumberKind) + + v1 = C.x * i + C.z * C.z * j + v2 = C.x * i + C.y * j + C.z * k + assert v1.kind is VectorKind(NumberKind) + assert v2.kind is VectorKind(NumberKind) + + assert (v1 + v2).kind is VectorKind(NumberKind) + assert Add(v1, v2).kind is VectorKind(NumberKind) + assert Cross(v1, v2).doit().kind is VectorKind(NumberKind) + assert VectorAdd(v1, v2).kind is VectorKind(NumberKind) + assert VectorMul(2, v1).kind is VectorKind(NumberKind) + assert VectorZero().kind is VectorKind(NumberKind) + + assert v1.projection(v2).kind is VectorKind(NumberKind) + assert v2.projection(v1).kind is VectorKind(NumberKind) + + +def test_vectoradd(): + assert isinstance(Add(C.i, C.j), VectorAdd) + v1 = C.x * i + C.z * C.z * j + v2 = C.x * i + C.y * j + C.z * k + assert isinstance(Add(v1, v2), VectorAdd) + + # https://github.com/sympy/sympy/issues/26121 + + E = Matrix([C.i, C.j, C.k]).T + a = Matrix([1, 2, 3]) + av = E*a + + assert av[0].kind == VectorKind() + assert isinstance(av[0], VectorAdd) + + def test_vector(): assert isinstance(i, BaseVector) assert i != j diff --git a/sympy/vector/vector.py b/sympy/vector/vector.py index d64ea6d70890..7847a9f01f42 100644 --- a/sympy/vector/vector.py +++ b/sympy/vector/vector.py @@ -1,7 +1,7 @@ from __future__ import annotations from itertools import product -from sympy.core.add import Add +from sympy.core import Add, Basic from sympy.core.assumptions import StdFactKB from sympy.core.expr import AtomicExpr, Expr from sympy.core.power import Pow @@ -14,6 +14,7 @@ BasisDependent, BasisDependentMul, BasisDependentAdd) from sympy.vector.coordsysrect import CoordSys3D from sympy.vector.dyadic import 
Dyadic, BaseDyadic, DyadicAdd +from sympy.vector.kind import VectorKind class Vector(BasisDependent): @@ -34,6 +35,8 @@ class Vector(BasisDependent): _base_func: type[Vector] zero: VectorZero + kind: VectorKind = VectorKind() + @property def components(self): """ @@ -344,6 +347,24 @@ def _div_helper(one, other): else: raise TypeError("Invalid division involving a vector") +# The following is adapted from the matrices.expressions.matexpr file + +def get_postprocessor(cls): + def _postprocessor(expr): + vec_class = {Add: VectorAdd}[cls] + vectors = [] + for term in expr.args: + if isinstance(term.kind, VectorKind): + vectors.append(term) + + if vec_class == VectorAdd: + return VectorAdd(*vectors).doit(deep=False) + return _postprocessor + + +Basic._constructor_postprocessor_mapping[Vector] = { + "Add": [get_postprocessor(Add)], +} class BaseVector(Vector, AtomicExpr): """
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-26557@07f5e42
sympy/sympy
Python
26,557
Prevent RecursionError in nc_simplify for floating point coefficients
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #26556 #### Brief description of what is fixed or changed Allow simplification of expressions with floating point coefficients for non-commutative symbols #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * simplify * Prevent RecursionError when simplifying expressions with floating-point coefficients on a non-commutative symbol <!-- END RELEASE NOTES -->
2024-04-30T13:44:06Z
RecursionError in simplify of non-commutative expressions with non-integer coefficient

Simplifying expressions with non-commutative symbols and floating point coefficients generates a recursion error.

```python
from sympy import Symbol, cos, pi
X = Symbol('X', commutative = False)

try:
    display( (2 * cos(pi/3) * X).simplify() )
except RecursionError as e:
    print(f'RecursionError 2*cos(pi/3): {e}')

try:
    display( (2.0 * cos(pi/3) * X).simplify() )
except RecursionError as e:
    print(f'RecursionError 2.0*cos(pi/3): {e}')

try:
    display( (2.0 * cos(pi/6) * X).simplify() )
except RecursionError as e:
    print(f'RecursionError 2.0*cos(pi/6): {e}')
```

outputs:

X
RecursionError 2.0*cos(pi/3): maximum recursion depth exceeded in comparison
RecursionError 2.0*cos(pi/6): maximum recursion depth exceeded in comparison
The check here fails if `com_coeff = 1.0`: https://github.com/sympy/sympy/blob/7c4f580838612edea6dcd6ae7d5b8a75af2d8f10/sympy/simplify/simplify.py#L1761-L1762
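To make that failure mode concrete, a small illustration (not taken from the PR): on the development version in question, `2.0*cos(pi/3)` evaluates to the Float `1.0`, which compares structurally unequal to the exact integer `1`, so the `com_coeff != 1` guard passes and `nc_simplify` keeps dividing by `1.0` and recursing on an expression that never changes. The `equal_valued` helper used by the fix compares numerically instead:

```python
from sympy import cos, pi, Integer
from sympy.core.numbers import equal_valued

coeff = 2.0 * cos(pi/3)            # the Float 1.0
print(coeff == Integer(1))         # False: structural comparison
print(equal_valued(coeff, 1))      # True: numerical comparison used by the fix
```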
[ { "body": "Simplifying expressions with non-commutative symbols and floating point coeefficients generates a recursion error.\r\n\r\n```python\r\nfrom sympy import Symbol, cos, pi\r\nX = Symbol('X', commutative = False)\r\n\r\ntry:\r\n display( (2 * cos(pi/3) * X).simplify() )\r\nexcept RecursionError as e:\r\n print(f'RecursionError 2*cos(pi/3): {e}')\r\n\r\ntry:\r\n display( (2.0 * cos(pi/3) * X).simplify() )\r\nexcept RecursionError as e:\r\n print(f'RecursionError 2.0*cos(pi/3): {e}')\r\n\r\ntry:\r\n display( (2.0 * cos(pi/6) * X).simplify() )\r\nexcept RecursionError as e:\r\n print(f'RecursionError 2.0*cos(pi/6): {e}')\r\n```\r\noutputs:\r\n\r\nX\r\nRecursionError 2.0*cos(pi/3): maximum recursion depth exceeded in comparison\r\nRecursionError 2.0*cos(pi/6): maximum recursion depth exceeded in comparison\r\n\r\n\r\n", "number": 26556, "title": "RecursionError in simplify of non-commutative expressions with non-integer coefficient" } ]
7c4f580838612edea6dcd6ae7d5b8a75af2d8f10
{ "head_commit": "07f5e42677c5b753f2d61e9fc4cfe2464e9e0b75", "head_commit_message": "Add name to mailmap file", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 4c0988c98177..0303e3b9fb17 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1616,6 +1616,7 @@ richierichrawr <[email protected]>\n rimibis <[email protected]>\n risubaba <[email protected]>\n ritikBhandari <[email protected]>\n+rrodenbusch <[email protected]>\n rushyam <[email protected]>\n sachinSingh16-09 <[email protected]>\n samithkavishke <[email protected]>\ndiff --git a/sympy/simplify/simplify.py b/sympy/simplify/simplify.py\nindex c86eb1500131..af351a3c8478 100644\n--- a/sympy/simplify/simplify.py\n+++ b/sympy/simplify/simplify.py\n@@ -9,7 +9,7 @@\n from sympy.core.parameters import global_parameters\n from sympy.core.function import (expand_log, count_ops, _mexpand,\n nfloat, expand_mul, expand)\n-from sympy.core.numbers import Float, I, pi, Rational\n+from sympy.core.numbers import Float, I, pi, Rational, equal_valued\n from sympy.core.relational import Relational\n from sympy.core.rules import Transform\n from sympy.core.sorting import ordered\n@@ -1758,7 +1758,7 @@ def compare(s, alt_s):\n # get the non-commutative part\n c_args, args = expr.args_cnc()\n com_coeff = Mul(*c_args)\n- if com_coeff != 1:\n+ if not equal_valued(com_coeff, 1):\n return com_coeff*nc_simplify(expr/com_coeff, deep=deep)\n \n inv_tot, args = _reduce_inverses(args)\ndiff --git a/sympy/simplify/tests/test_simplify.py b/sympy/simplify/tests/test_simplify.py\nindex a26e8e33a2eb..f4392b669375 100644\n--- a/sympy/simplify/tests/test_simplify.py\n+++ b/sympy/simplify/tests/test_simplify.py\n@@ -1080,3 +1080,8 @@ def test_reduce_inverses_nc_pow():\n x, y = symbols(\"x y\", positive=True)\n assert expand((x*y)**Z) == x**Z * y**Z\n assert simplify(x**Z * y**Z) == expand((x*y)**Z)\n+\n+def test_nc_recursion_coeff():\n+ X = symbols(\"X\", commutative = False)\n+ assert (2 * cos(pi/3) * X).simplify() == X\n+ assert (2.0 * cos(pi/3) * X).simplify() == X\n" }
[ { "diff_hunk": "@@ -1616,6 +1616,7 @@ richierichrawr <[email protected]>\n rimibis <[email protected]>\n risubaba <[email protected]>\n ritikBhandari <[email protected]>\n+rrodenbusch <[email protected]>", "line": null, "original_line": 1619, "original_start_line": null, "path": ".mailmap", "start_line": null, "text": "@user2:\nUsually the idea is to add a .mailmap entry like:\r\n```\r\nRichard Rodenbusch <[email protected]> rrodenbusch <[email protected]>\r\n```\r\nThe first name and email address is what will eventually go in the AUTHORS file. The second is what is recorded in the commit metadata. This line in the .mailmap file associates the author name with the corresponding commits.\r\n\r\nThere are a number of entries not like that in the .mailmap file that mostly come from people who have misunderstood what these entries are intended to do.\n\n@author:\nI will update my entry. The entry in the documentation for this process mentions the dual entry format after the step to commit the change to .mailmap. Perhaps we should re-order that?\r\n\r\n\n\n@user2:\nMaybe. I'm not sure which part you are referring to. Is it this:\r\nhttps://docs.sympy.org/dev/contributing/new-contributors-guide/workflow-process.html#add-your-name-and-email-address-to-the-mailmap-file\n\n@author:\nYes. That is the document I was following. I'd tweak the document so it shows the two entry option in the initial section, then links to the section below with the more detailed description on the entries. I'm attaching changes here so you can look at it before I add it to the PR.\r\n\r\n### _first section_\r\n\r\nThe first time you make a pull request you will need to add your name and email address to the .mailmap file by adding a line like\r\n\r\n```\r\nJoe Bloggs <[email protected]> joeb <[email protected]>\r\n```\r\n\r\nThis line in the .mailmap file associates the author name with the corresponding commits. The first name and email address is what will eventually go in the AUTHORS file. The second entry is what is recorded in the commit metadata. (see [Mapping user names to AUTHORS file entry](#mailmap-mapping-names))\r\n\r\nThe commit metadata name and email should exactly match the name and email that you have configured with git before making the commits (see \"Configure git settings\" above). The `bin/mailmap_check.py` script can check that this has been done correctly. \r\n\r\n\r\n### _then_ _below_\r\n\r\n(mailmap-mapping-names)=\r\n### Mapping user names to AUTHORS file entry\r\nSometimes a commit will be made with an incorrect name or email address or an author will make multiple commits with different names and email addresses or an author wishes to use a proper name that differs from their github name. 
In this case a line should be added to the .mailmap file where the first name and email address is what should be recorded in the AUTHORS file and the others are the name and email address that was incorrectly used in the other commits.\r\n\r\n\r\n### _the updated md_\r\n[workflow-process.md](https://github.com/sympy/sympy/files/15168077/workflow-process.md)\r\n\n\n@user2:\nYes, maybe that is better.\r\n\r\nThe premise of the way it is written currently is that you will configure git to have the right name and email address first:\r\n```console\r\n$ git config user.name\r\nOscar Benjamin\r\n```\r\nIt seems that it refers to \"configure git settings\" that is not present in the document though.\r\n\r\nThe two entry version is only needed if the name configured in git is not the name that should be in the authors file. It seems to be very common though that the name and/or email in the commits is not the right one for the authors file so it is probably more common to need the two entry version. I agree that\r\n```\r\nJoe Bloggs <[email protected]> joeb <[email protected]>\r\n```\r\nis probably the best thing to suggest by default.\n\n@author:\nI found 'configure git settings' in dev-setup.md and added the links. Do I add these changes be added to this PR or should I generate a new one?\r\n\r\n[workflow-process.md](https://github.com/sympy/sympy/files/15168941/workflow-process.md)\r\n\r\n[dev-setup.md](https://github.com/sympy/sympy/files/15168959/dev-setup.md)\r\n\r\n\n\n@user2:\nI'd say go with a new PR for the workflow changes.\n\n@user2:\nIn the new PR you won't have the problems because you are already in the .mailmap file (unless you change your git config...)." } ]
e3c8666ce5748cb284e84b418087e59d21de6675
diff --git a/.mailmap b/.mailmap index 4c0988c98177..aac350bcc3c5 100644 --- a/.mailmap +++ b/.mailmap @@ -1188,6 +1188,7 @@ Riccardo Magliocchetti <[email protected]> Rich LaSota <[email protected]> Richard Otis <[email protected]> <[email protected]> Richard Otis <[email protected]> <[email protected]> +Richard Rodenbusch <[email protected]> rrodenbusch <[email protected]> Richard Samuel <[email protected]> Richard Samuel <[email protected]> Rick Muller <[email protected]> diff --git a/sympy/simplify/simplify.py b/sympy/simplify/simplify.py index c86eb1500131..af351a3c8478 100644 --- a/sympy/simplify/simplify.py +++ b/sympy/simplify/simplify.py @@ -9,7 +9,7 @@ from sympy.core.parameters import global_parameters from sympy.core.function import (expand_log, count_ops, _mexpand, nfloat, expand_mul, expand) -from sympy.core.numbers import Float, I, pi, Rational +from sympy.core.numbers import Float, I, pi, Rational, equal_valued from sympy.core.relational import Relational from sympy.core.rules import Transform from sympy.core.sorting import ordered @@ -1758,7 +1758,7 @@ def compare(s, alt_s): # get the non-commutative part c_args, args = expr.args_cnc() com_coeff = Mul(*c_args) - if com_coeff != 1: + if not equal_valued(com_coeff, 1): return com_coeff*nc_simplify(expr/com_coeff, deep=deep) inv_tot, args = _reduce_inverses(args) diff --git a/sympy/simplify/tests/test_simplify.py b/sympy/simplify/tests/test_simplify.py index a26e8e33a2eb..f4392b669375 100644 --- a/sympy/simplify/tests/test_simplify.py +++ b/sympy/simplify/tests/test_simplify.py @@ -1080,3 +1080,8 @@ def test_reduce_inverses_nc_pow(): x, y = symbols("x y", positive=True) assert expand((x*y)**Z) == x**Z * y**Z assert simplify(x**Z * y**Z) == expand((x*y)**Z) + +def test_nc_recursion_coeff(): + X = symbols("X", commutative = False) + assert (2 * cos(pi/3) * X).simplify() == X + assert (2.0 * cos(pi/3) * X).simplify() == X
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26546@b4f4d39
sympy/sympy
Python
26,546
improve assumptions of roots of non-real expression
#### References to other Issues or PRs Fixes #26545 #### Brief description of what is fixed or changed Add a simple check in `_eval_is_extended_real` of the `Pow` class, to return `False`, when the n-th root of a non-real expression is taken, i.e. when the base is not real and the exponent is a rational number with a nominator of `1`. #### Other comments The explicit checking, wheather the exponent is `Half` using `if self.exp == S.Half:` seems not optimal. Did i forget other fixed fractions, or is there a better way to check this? #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * core * Improve the assumptions system by assuming roots of non-real expressions to be non-real themselves <!-- END RELEASE NOTES -->
2024-04-26T11:56:11Z
Roots of non-real elements should be non-real themselves, but their domain is unknown ### Description This is not a bug, rather an idea for an improvement At the moment, taking the squareroot of a non-real expression (`is_extended_real==False`) does result in the value of `is_extended_real` being `None`. ```{python} >>> x = Symbol('x', real=True) >>> print((I+x).is_extended_real) False >>> print(sqrt(I+x).is_extended_real) None ``` To the best of my knowledge, it would be correct to assume, that the root of a non-real number is not real itself, since this follows directly from the reals being closed under multiplication, i.e. $\sqrt{a}\in \mathbb{R} \Rightarrow \sqrt{a}\cdot \sqrt{a}\in\mathbb{R}$, which contradicts with the assumption. ### Motivation When solving multivariate quartic polynomials, the solution sometimes contains the above mentioned subexpression. For example the polynomial $a^4 - 2a^3b - 2a^3c + a^2b^2 + 4a^2bc + a^2c^2 - 2ab^2c - 2abc^2 + b^2c^2 + 1$ has no real solution. When we solve for a, we get, among others, the following solution: $\frac{b}{2} + \frac{c}{2} - \frac{\sqrt{b^2 - 2bc + c^2 - 4I}}{2}$. ```{python} >>> a = Symbol('a', real=True) >>> b = Symbol('b', real=True) >>> c = Symbol('c', real=True) >>> expr = solve(a**4 - 2*a**3*b - 2*a**3*c + a**2*b**2 + 4*a**2*b*c + a**2*c**2 - 2*a*b**2*c - 2*a*b*c**2 + b**2*c**2 + 1, a)[0] >>> print(expr.is_extended_real) None ``` ### Possible modification The function `_eval_is_extended_real` of the `Pow` class could be amplified, to return False, when the base is not real and the exponent is a rational with a nominator of 1.
> The function `_eval_is_extended_real` of the `Pow` class could be amplified, to return False, when the base is not real and the exponent is a rational with a nominator of 1. Sounds reasonable.
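A short illustration of why the suggestion is sound (a sketch, not code from the PR): for an exponent `1/n` the identity `(b**(1/n))**n == b` always holds because the outer exponent is an integer, so a real value of `b**(1/n)` would force the base `b` to be real, contradicting `b.is_extended_real == False`:

```python
from sympy import I, Symbol, Rational, Pow, sqrt

x = Symbol('x', real=True)
b = x + I                                   # a base known to be non-real
print(b.is_extended_real)                   # False
print(Pow(Pow(b, Rational(1, 3)), 3) == b)  # True: an integer power undoes the root
print(sqrt(b).is_extended_real)             # None before the change, False after it
```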
[ { "body": "### Description\r\nThis is not a bug, rather an idea for an improvement\r\n\r\nAt the moment, taking the squareroot of a non-real expression (`is_extended_real==False`) does result in the value of `is_extended_real` being `None`.\r\n\r\n```{python}\r\n>>> x = Symbol('x', real=True)\r\n>>> print((I+x).is_extended_real)\r\nFalse\r\n>>> print(sqrt(I+x).is_extended_real)\r\nNone\r\n```\r\nTo the best of my knowledge, it would be correct to assume, that the root of a non-real number is not real itself, since this follows directly from the reals being closed under multiplication, i.e. $\\sqrt{a}\\in \\mathbb{R} \\Rightarrow \\sqrt{a}\\cdot \\sqrt{a}\\in\\mathbb{R}$, which contradicts with the assumption.\r\n\r\n### Motivation\r\nWhen solving multivariate quartic polynomials, the solution sometimes contains the above mentioned subexpression.\r\nFor example the polynomial $a^4 - 2a^3b - 2a^3c + a^2b^2 + 4a^2bc + a^2c^2 - 2ab^2c - 2abc^2 + b^2c^2 + 1$ has no real solution. When we solve for a, we get, among others, the following solution: $\\frac{b}{2} + \\frac{c}{2} - \\frac{\\sqrt{b^2 - 2bc + c^2 - 4I}}{2}$.\r\n\r\n```{python}\r\n>>> a = Symbol('a', real=True)\r\n>>> b = Symbol('b', real=True)\r\n>>> c = Symbol('c', real=True)\r\n>>> expr = solve(a**4 - 2*a**3*b - 2*a**3*c + a**2*b**2 + 4*a**2*b*c + a**2*c**2 - 2*a*b**2*c - 2*a*b*c**2 + b**2*c**2 + 1, a)[0]\r\n>>> print(expr.is_extended_real)\r\nNone\r\n```\r\n\r\n### Possible modification\r\nThe function `_eval_is_extended_real` of the `Pow` class could be amplified, to return False, when the base is not real and the exponent is a rational with a nominator of 1.\r\n", "number": 26545, "title": "Roots of non-real elements should be non-real themselves, but their domain is unknown" } ]
68b548d79d43004a1c68990fedd9933384256d20
{ "head_commit": "b4f4d39e5f07b6ac4b0e6e9439552b0e5dea86de", "head_commit_message": "fix use wrong property", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 2b868286fc0f..4c0988c98177 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -881,6 +881,7 @@ Liwei Cai <[email protected]>\n Ljubiša Moćić <[email protected]>\n Lokesh Sharma <[email protected]> <[email protected]>\n Longqi Wang <[email protected]>\n+Lorenz Winkler <[email protected]>\n Lorenzo Contento <[email protected]> Lorenzo Contento <[email protected]>\n Lorenzo Contento <[email protected]> lcontento <[email protected]>\n Louis Abraham <[email protected]> <[email protected]>\ndiff --git a/sympy/core/power.py b/sympy/core/power.py\nindex b8676164ec3c..99c74143aa1a 100644\n--- a/sympy/core/power.py\n+++ b/sympy/core/power.py\n@@ -513,6 +513,11 @@ def _eval_is_extended_real(self):\n return self.exp.is_imaginary\n return\n real_e = self.exp.is_extended_real\n+ if real_b == False and real_e:\n+ if self.exp == S.Half:\n+ return False\n+ if type(self.exp) == Rational and self.exp.numerator == 1:\n+ return False\n if real_e is None:\n return\n if real_b and real_e:\n@@ -1837,6 +1842,6 @@ def _eval_difference_delta(self, n, step):\n power.add((object, object), Pow)\n \n from .add import Add\n-from .numbers import Integer\n+from .numbers import Integer, Rational\n from .mul import Mul, _keep_coeff\n from .symbol import Symbol, Dummy, symbols\n" }
[ { "diff_hunk": "@@ -513,6 +513,11 @@ def _eval_is_extended_real(self):\n return self.exp.is_imaginary\n return\n real_e = self.exp.is_extended_real\n+ if real_b == False and real_e:\n+ if self.exp == S.Half:\n+ return False\n+ if type(self.exp) == Rational and self.exp.numerator == 1:\n+ return False", "line": null, "original_line": 520, "original_start_line": 515, "path": "sympy/core/power.py", "start_line": null, "text": "@user1:\nThere is already a `if real_b is False and real_e` case below so this should be moved there.\r\n\r\nThese two cases can be combined with `if isinstance(self.exp, Rational) and self.exp.p == 1`.\r\n\r\nPerhaps the code there using `arg` should be removed in favour of this code checking for Rational and numerator 1.\n\n@author:\nHi, thanks for the feedback - i somehow missed that case, sorry.\r\n\r\nRegarding the part with `arg`, i would be careful to remove it. As far as i can tell, it is relevant for computing the following:\r\n\r\n```{python}\r\n>>> expr = Pow(4*I+4, 8)\r\n>>> print(expr.is_extended_real)\r\nTrue\r\n```\r\n\r\nbecause when i remove it in favour of my code:\r\n```{python}\r\n>>> expr = Pow(4*I+4, 8)\r\n>>> print(expr.is_extended_real)\r\nNone\r\n```\r\n\r\nI would thus suggest to not remove it\n\n@user1:\nIt is expected that the assumptions system cannot resolve all queries and will sometimes return None. The problem with using `arg` here is that it creates a new expression which is not something that should happen within the assumptions system." } ]
e676709a3f5805acd6b7a9bf19c71c4497a46c3f
diff --git a/.mailmap b/.mailmap index 2b868286fc0f..4c0988c98177 100644 --- a/.mailmap +++ b/.mailmap @@ -881,6 +881,7 @@ Liwei Cai <[email protected]> Ljubiša Moćić <[email protected]> Lokesh Sharma <[email protected]> <[email protected]> Longqi Wang <[email protected]> +Lorenz Winkler <[email protected]> Lorenzo Contento <[email protected]> Lorenzo Contento <[email protected]> Lorenzo Contento <[email protected]> lcontento <[email protected]> Louis Abraham <[email protected]> <[email protected]> diff --git a/sympy/core/power.py b/sympy/core/power.py index b8676164ec3c..02eb307fd655 100644 --- a/sympy/core/power.py +++ b/sympy/core/power.py @@ -560,6 +560,8 @@ def _eval_is_extended_real(self): return ok if real_b is False and real_e: # we already know it's not imag + if isinstance(self.exp, Rational) and self.exp.p == 1: + return False from sympy.functions.elementary.complexes import arg i = arg(self.base)*self.exp/S.Pi if i.is_complex: # finite @@ -1837,6 +1839,6 @@ def _eval_difference_delta(self, n, step): power.add((object, object), Pow) from .add import Add -from .numbers import Integer +from .numbers import Integer, Rational from .mul import Mul, _keep_coeff from .symbol import Symbol, Dummy, symbols diff --git a/sympy/core/tests/test_power.py b/sympy/core/tests/test_power.py index 8279c9f43fde..9dcc828d854c 100644 --- a/sympy/core/tests/test_power.py +++ b/sympy/core/tests/test_power.py @@ -555,7 +555,7 @@ def test_issue_14815(): assert sqrt(x).is_extended_negative is False x = Symbol('x', extended_real=True) assert sqrt(x).is_extended_negative is False - assert sqrt(zoo, evaluate=False).is_extended_negative is None + assert sqrt(zoo, evaluate=False).is_extended_negative is False assert sqrt(nan, evaluate=False).is_extended_negative is None @@ -651,3 +651,13 @@ def test_powers_of_I(): def test_issue_23918(): b = S(2)/3 assert (b**x).as_base_exp() == (1/b, -x) + + +def test_issue_26546(): + x = Symbol('x', real=True) + assert x.is_extended_real is True + assert sqrt(x+I).is_extended_real is False + assert Pow(x+I, S.Half).is_extended_real is False + assert Pow(x+I, Rational(1,2)).is_extended_real is False + assert Pow(x+I, Rational(1,13)).is_extended_real is False + assert Pow(x+I, Rational(2,3)).is_extended_real is None
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-26484@cf57b4f
sympy/sympy
Python
26,484
Text backend plotting
Fixes #26481. For some reason, the bounds of the backend renderer became complex at some point. I just check if they are complex and then convert them to real if the complex part is equal to 0. This solution is kinda a hack and it might be good to investigate why the endpoints are becoming complex at some point, but this fixes the problem I was having. Also, might still be a good idea to add in some message telling user that installing matplotlib would probably be a good idea. <!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * plotting * Fixed an issue which caused plotting using backend='text' to error. <!-- END RELEASE NOTES -->
2024-04-09T22:10:30Z
Plotting with Sympy's inbuilt TextBackend doesn't seem to be working ```from sympy import symbols from sympy.plotting import plot x = symbols('x') plot(x, (x, 0, 5)) Traceback (most recent call last): File "c:\Users\sammy\gsoc\sympy\test.py", line 4, in <module> plot(x, (x, 0, 5)) File "c:\Users\sammy\gsoc\sympy\sympy\plotting\plot.py", line 408, in plot plots.show() File "c:\Users\sammy\gsoc\sympy\sympy\plotting\backends\textbackend\text.py", line 21, in show textplot(ser.expr, ser.start, ser.end) File "c:\Users\sammy\gsoc\sympy\sympy\plotting\textplot.py", line 161, in textplot for line in textplot_str(expr, a, b, W, H): File "c:\Users\sammy\gsoc\sympy\sympy\plotting\textplot.py", line 53, in textplot_str a = float(a) ^^^^^^^^ TypeError: float() argument must be a string or a real number, not 'complex' ``` Note: I think you will only be able to reproduce this if you don't have matplotlib installed, or else it will just use matplotlib.
You can force the backend to be text with `plot(..., backend='text')`. @SamLubelsky did you come across this as a result of running the tests/doctests in some way? I recently came across this as well with `pytest --doctest-only sympy/plotting` when matplotlib was not installed. I have fixed that in https://github.com/sympy/sympy/pull/26451 to at least skip the test when matplotlib is not installed but if this gets fixed then the test should not be skipped. I came across it while I was trying to fix some of the issues shown in #19996 that should be fixed due to changing from exprimental_lambdify to lambdify when I didn't have matplotlib installed in my environment. @moorepants This doesn't work because the problem seems to be with the text backend, so forcing it to use the text backend doesn't change anything. Using backend='matplotlib' on the other hand does work as long as matplotlib is installed, cause the matplotlib plotting seems to be working. If this issue can't be fixed(I have no idea how it would be), I feel like users who don't have matplotlib installed should be given some better error telling them something like: `Since matplotlib is not installed, Sympy's graphing backend will be used. Installing matplotlib is recommended.` When I was trying to debug why it wasn't graphing properly it took me a while to figure out it was because matplotlib wasn't installed because Sympy currently gives no indication of that. > This doesn't work because the problem seems to be with the text backend, so forcing it to use the text backend doesn't change anything. I wasn't trying to imply that it fixed anything. I just mentioned it so you can reproduce the error without uninstalling matplotlib. Oh sorry, I see what you mean.
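The traceback in the report boils down to `float()` rejecting `complex` input even when the imaginary part is zero. A hedged sketch of the coercion the fix performs, written as a standalone helper (the name `as_real_float` is invented here and is not part of the PR):

```python
def as_real_float(value):
    """Coerce a plot bound to float, accepting complex values with zero imaginary part."""
    if isinstance(value, complex):
        if value.imag != 0:
            raise TypeError(f"expected a real bound, got {value!r}")
        value = value.real
    return float(value)

print(as_real_float(3))         # 3.0
print(as_real_float(5 + 0j))    # 5.0
```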
[ { "body": "```from sympy import symbols\r\nfrom sympy.plotting import plot\r\nx = symbols('x')\r\nplot(x, (x, 0, 5))\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\Users\\sammy\\gsoc\\sympy\\test.py\", line 4, in <module>\r\n plot(x, (x, 0, 5))\r\n File \"c:\\Users\\sammy\\gsoc\\sympy\\sympy\\plotting\\plot.py\", line 408, in plot\r\n plots.show()\r\n File \"c:\\Users\\sammy\\gsoc\\sympy\\sympy\\plotting\\backends\\textbackend\\text.py\", line 21, in show\r\n textplot(ser.expr, ser.start, ser.end)\r\n File \"c:\\Users\\sammy\\gsoc\\sympy\\sympy\\plotting\\textplot.py\", line 161, in textplot\r\n for line in textplot_str(expr, a, b, W, H):\r\n File \"c:\\Users\\sammy\\gsoc\\sympy\\sympy\\plotting\\textplot.py\", line 53, in textplot_str\r\n a = float(a)\r\n ^^^^^^^^\r\nTypeError: float() argument must be a string or a real number, not 'complex'\r\n```\r\n\r\nNote: I think you will only be able to reproduce this if you don't have matplotlib installed, or else it will just use matplotlib.", "number": 26481, "title": "Plotting with Sympy's inbuilt TextBackend doesn't seem to be working" } ]
e89ee9373fbbb90e92e18e6181606254653bbc34
{ "head_commit": "cf57b4f3631eba99cc379a458076fa0e992dcc61", "head_commit_message": "added regression tests", "patch_to_review": "diff --git a/sympy/plotting/tests/test_plot.py b/sympy/plotting/tests/test_plot.py\nindex dc6950de2224..b62ec113bfeb 100644\n--- a/sympy/plotting/tests/test_plot.py\n+++ b/sympy/plotting/tests/test_plot.py\n@@ -60,6 +60,13 @@ def save(self):\n def close(self):\n pass\n \n+def test_basic_plotting_backend():\n+ x = Symbol('x')\n+ try:\n+ plot(x, (x, 0, 3), backend='text')\n+ plot(x**2 + 1, (x, 0, 3), backend='text')\n+ except TypeError as exc:\n+ assert False, f\"{exc}\"\n \n @pytest.mark.parametrize(\"adaptive\", [True, False])\n def test_plot_and_save_1(adaptive):\ndiff --git a/sympy/plotting/textplot.py b/sympy/plotting/textplot.py\nindex 2ba83ad0fbfb..5f1f2b639d6c 100644\n--- a/sympy/plotting/textplot.py\n+++ b/sympy/plotting/textplot.py\n@@ -50,6 +50,12 @@ def textplot_str(expr, a, b, W=55, H=21):\n .format(free))\n x = free.pop() if free else Dummy()\n f = lambdify([x], expr)\n+ if isinstance(a, complex):\n+ if a.imag == 0:\n+ a = a.real\n+ if isinstance(b, complex):\n+ if b.imag == 0:\n+ b = b.real\n a = float(a)\n b = float(b)\n \n" }
[ { "diff_hunk": "@@ -60,6 +60,13 @@ def save(self):\n def close(self):\n pass\n \n+def test_basic_plotting_backend():\n+ x = Symbol('x')\n+ try:\n+ plot(x, (x, 0, 3), backend='text')\n+ plot(x**2 + 1, (x, 0, 3), backend='text')\n+ except TypeError as exc:\n+ assert False, f\"{exc}\"", "line": null, "original_line": 69, "original_start_line": 65, "path": "sympy/plotting/tests/test_plot.py", "start_line": null, "text": "@user1:\nI don't think that you should catch errors here.\r\nIt is often pointless for test code to catch errors because the pytest or the test framework does actually handles the thrown errors. If you catch the errors, it can just obscure the error messages or stack trace.\r\n\r\n```suggestion\r\n plot(x, (x, 0, 3), backend='text')\r\n plot(x**2 + 1, (x, 0, 3), backend='text')\r\n```" } ]
8757b8879934776241fe84138b38c1ad19b577b0
diff --git a/sympy/plotting/tests/test_plot.py b/sympy/plotting/tests/test_plot.py index dc6950de2224..bf09e825e744 100644 --- a/sympy/plotting/tests/test_plot.py +++ b/sympy/plotting/tests/test_plot.py @@ -60,6 +60,10 @@ def save(self): def close(self): pass +def test_basic_plotting_backend(): + x = Symbol('x') + plot(x, (x, 0, 3), backend='text') + plot(x**2 + 1, (x, 0, 3), backend='text') @pytest.mark.parametrize("adaptive", [True, False]) def test_plot_and_save_1(adaptive): diff --git a/sympy/plotting/textplot.py b/sympy/plotting/textplot.py index 2ba83ad0fbfb..5f1f2b639d6c 100644 --- a/sympy/plotting/textplot.py +++ b/sympy/plotting/textplot.py @@ -50,6 +50,12 @@ def textplot_str(expr, a, b, W=55, H=21): .format(free)) x = free.pop() if free else Dummy() f = lambdify([x], expr) + if isinstance(a, complex): + if a.imag == 0: + a = a.real + if isinstance(b, complex): + if b.imag == 0: + b = b.real a = float(a) b = float(b)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26515@2c01ca9
sympy/sympy
Python
26,515
fix(polys): fix multi factorisation over QQ<a>
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes gh-26497 #### Brief description of what is fixed or changed There are various things here. - Fix `dmp_sqf_p` to check properly that multivariate polynomials are square free over the ground domain. This is also fixes `Poly.is_sqf`. - Change `dmp_sqf_norm` to use shifts in multiple variables. - Add `_dmp_sqf_norm_shifts` to try to find efficient multivariate shifts. - The `sqf_norm` functions now return a shift list rather than a single shift integer. This is to compute a shift like `f(x1-s1*a, x2-s2*a, ...)`. - Add `dmp_shift` and `Poly.shift_list` for multivariate polynomial shifts. - Add sanity checks that the degrees add up in various factorisation routines. - Add `dmp_norm` to `PolyRing`. - Add `norm` to `PolyElement`. - Add tests for better general coverage of densetools. - Add missing functions to polys docs and minor improvements to docstrings. - Expand docstrings with explanation of some algorithms and references. The most important part is the fix for gh-26497 which is that `dmp_ext_factor` and hence `factor` over `ZZ_I` or `QQ_I` gave incorrect results for some multivariate polynomials. The bug is caused by `dmp_sqf_norm` choosing a polynomial that is not square-free which is in turn caused by `dmp_sqf_p` incorrectly reporting some polynomials as square-free. The implementation of `dmp_sqf_p` on master treats a multivariate polynomial as if it were a univariate polynomial in its leading variable. This means that it only considers a polynomial to be square-free if it has squared factors of positive degree in the leading variable. Hence: ```python In [1]: (y**2).as_poly(x, y).is_sqf Out[1]: True ``` This is not the sense of "square-free" that is needed for `sqf_norm` i.e. a polynomial should only be square free if there are no squared factors having positive degree in *any* variable. Potentially there is some value in having two different notions of square-free but realistically the notion that we generally want is that of being square free in any variable so that is what I have changed it to. Fixing `dmp_sqf_p` means that `dmp_sqf_norm` does not return an incorrect result with a norm that is not square-free. However then `dmp_sqf_norm` goes into an infinite loop. This is because it tries to shift the polynomial like `f(x,y,z,...) -> f(x-1,y,z,...) -> f(x-2,y,z,...)`. This is sequence of shifts is not guaranteed to result in a square-free norm for a multivariate polynomial. Instead we can use `f(x,y,z,...) -> f(x-1,y-1,z-1,...) -> f(x-2,y-2,z-2,...)`. Shifting all variables like this works and guarantees to find a square-free norm but can be very slow for some polynomials and caused some tests to hang. A more efficient sequence of shifts requires that we use different shifts for each variable e.g. we try `f(x,y,z)`, `f(x-1,y,z)`, `f(x,y-1,z)` and so on before shifting all variables. This usually results in finding a shift such that the norm can be computed just as efficiently as before. The problem though is that this means that `s` returned from `dmp_sqf_norm` needs to be a list of integer shifts rather than a single integer. 
Changing `dmp_sqf_norm` and also `Poly.sqf_norm` and `sqf_norm` to return a list of integer shifts is not backwards compatible. However I doubt that anyone is using these functions for anything. As far as I know their only real purpose is as an internal routine in Trager's algorithm for factorisation (`dup_ext_factor/dmp_ext_factor`). I don't see why anyone would be using these functions directly rather than e.g. `factor` so I think it should be okay to change the return type. In turn I then added `dmp_shift` and `Poly.shift_list` so that the list of shifts can be used directly. I then added various other things including tests, docs, etc as a general cleanup. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * polys * A serious bug in factorisation of multivariate polynomials over extension fields was fixed. Previously, incorrect results were returned in some cases. The bug would most likely manifest when calling `factor` on an expression with more than one symbol and the complex unit `I`. * A bug in the Poly method `is_sqf` was fixed. Previously some multivariate polynomials were incorrectly reported as being square-free. * BREAKING CHANGE: The `sqf_norm` function and `Poly.sqf_norm` functions are now changed to return a list of integer shifts instead of a single integer shift. This is not a backwards compatible change but is needed because in general in the multivariate case different shifts are needed for each variable. * A new `shift_list` method is added to `Poly` and other polynomial types for computing shifts of multivariate polynomials. * Some documentation was added for internal routines in the polys module. <!-- END RELEASE NOTES -->
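To make the notion of a per-variable shift concrete, here is a small illustration using plain substitution rather than the new `shift_list` API (the shift values chosen here are arbitrary):

```python
from sympy import symbols, expand

x, y, a = symbols('x y a')
f = x**2*y + y**2
s = [1, 2]                       # one integer shift per variable
shifted = expand(f.subs([(x, x - s[0]*a), (y, y - s[1]*a)], simultaneous=True))
print(shifted)                   # f(x - a, y - 2*a), expanded
```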
2024-04-17T22:48:43Z
factor produces wrong output

`sympy.factor(expr)` produces the wrong output if an underscore is used in the name of one of the symbols in `expr`.
EDIT: The issue is more general. The correctness of the factorization output depends on the names of the variables. Using symbol names with more than one character also changes the output, for example.

**Observed behavior:**
For example,
```
a_n, m = sympy.symbols('a_n m', real=True, positive=True)
expr = (-a_n**4 *m**2 / (a_n**2 + 1) - 2 * a_n**4 *m / (a_n**2 + 1) - a_n**4 - a_n**4 / (a_n**2 + 1) + 2 * sympy.I * a_n**3 *m**2 / (a_n**2 + 1) + 4 * sympy.I * a_n**3 *m / (a_n**2 + 1) + 2 * sympy.I * a_n**3 + 2 * sympy.I * a_n**3 / (a_n**2 + 1) + 2 * sympy.I * a_n *m**2 / (a_n**2 + 1) + 4 * sympy.I * a_n *m / (a_n**2 + 1) + 2 * sympy.I * a_n + 2 * sympy.I * a_n / (a_n**2 + 1) +m**2 / (a_n**2 + 1) + 2 *m / (a_n**2 + 1) + 1 + 1 / (a_n**2 + 1))
```
gives different numerical results if I factor `expr` and replace `a_n` and `m` with numbers.
`expr.subs({a_n: 1, m: 1})` produces `12i`, whereas `expr.factor().subs({a_n: 1, m: 1})` produces `-6`.
A correct behavior would result in `12i`, or equivalent numerical expressions, in both cases.

On the other hand, defining
```
a_n, m = sympy.symbols('a m', real=True, positive=True)
```
correctly produces the equivalent outputs `12i` in both cases.

**Sympy version**
I observe this in the stable branch, but also in 1.12 and 1.10. Didn't check other versions.

**Other modifications that produce the same error**
Using `a_n, m = sympy.symbols('an m', real=True, positive=True)`, so a symbol with a name of more than one character, also reproduces the error.
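A minimal way to confirm an affected installation is to evaluate an expression and its factored form at the same point; the two values should agree but do not on buggy versions. This sketch uses the shorter reproducer from the discussion below rather than the report's original expression:

```python
import sympy

a, b = sympy.symbols("a b")
e = sympy.expand((b - sympy.I)**2 * (b + sympy.I) * (a + 1))

# Evaluate the original and the factored form at the same point;
# on an affected version the two values disagree, signalling the bug.
point = {a: 1, b: 1}
print(e.subs(point))                 # 4 - 4*I
print(sympy.factor(e).subs(point))   # differs (e.g. 4) on an affected version
```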
The bug seems to be related to factoring of multivariate expressions including `sympy.I`. A shorter reproduction example is ```py import sympy a, b = sympy.symbols("a b") ((b - sympy.I)**2 * (b + sympy.I) * (a + 1)).expand().factor() ``` which produces an incorrect expression $(a+1)(b^2 + 1)$. The originally reported role of variable naming seems to be related to the ordering of terms or factors: swapping $a$ and $b$ in the expression above makes the output correct. This is a serious bug. It has been around since SymPy started factorising such polynomials automatically: d3eab468768801dff2c810a5dc5d6d446d6db01c. Before that you would need to use `extension=True` to factorise this. Now all mehods give the same: ```python In [1]: import sympy ...: a, b = sympy.symbols("a b") ...: e = ((b - sympy.I)**2 * (b + sympy.I) * (a + 1)) In [6]: e.expand().factor() Out[6]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ In [7]: e.expand().factor(extension=True) Out[7]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ In [8]: e.expand().factor(domain=QQ_I) Out[8]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ In [11]: e.expand().factor(domain=QQ.algebraic_field(I)) Out[11]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ ``` There must be a bug in the code for factorisation over algebraic fields: https://github.com/sympy/sympy/blob/d11d0b220e94dd04e0acaf339e784edb8e7d24fe/sympy/polys/factortools.py#L1280-L1307 The bug is in `dmp_sqf_list`: ```python In [25]: sqf(expand(((y - I)**2 * (y + I) * (z + 1)))) Out[25]: 2 (y - ⅈ) ⋅(y + ⅈ) In [26]: sqf(expand(((y - I)**2 * (y + I) * (x + 1)))) Out[26]: x + 1 ``` Both of those answers are wrong. The factors for one variable are missing in each case. The bug is here: https://github.com/sympy/sympy/blob/87bf9d4343202ee2d4d544a0ee8c6cdfe36feb6a/sympy/polys/sqfreetools.py#L409-L429 That seems to use the same logic as for the univariate case that you can compute the square free factorisation by differentiating and then computing the gcd. The difference though is that in the multivariate case there is more than one choice of variable to differentiate with respect to and this seems to make an arbitrary choice. We might then end up with a factor like `x + 1` and then differentiate wrt `y` and then exit the loop without collecting all factors. What should probably happen is that once `p` gets down to degree 0 in the first variable it should recurse like it would at the start if the polynomial was already degree 0 in the first variable at the start: https://github.com/sympy/sympy/blob/87bf9d4343202ee2d4d544a0ee8c6cdfe36feb6a/sympy/polys/sqfreetools.py#L404-L407 The implemented algorithm seems to be Yun's algorithm but applying the univariate version of the algorithm naively to multivariate polynomials. In Yun's paper it is stated that the algorithm requires the multivariate polynomials to be primitive in the given varaible that is used for differentiation: > On square free decomposition algorithms, Yun, 1976 > https://dl.acm.org/doi/10.1145/800205.806320 I think this means that to apply this algorithm in the multivariate case you would first need to use `dmp_primitive` to factor out the content of the coefficients wrt x and then also compute the square free factorisation of the content recursively. 
Currently it uses `dmp_ground_primitive` which is not the same e.g.: ```python In [12]: p = expand((y - I)**2 * (y + I) * (z + 1)) In [13]: p Out[13]: 3 3 2 2 y ⋅z + y - ⅈ⋅y ⋅z - ⅈ⋅y + y⋅z + y - ⅈ⋅z - ⅈ In [14]: p.as_poly().primitive() Out[14]: (1, Poly(y**3*z + y**3 - I*y**2*z - I*y**2 + y*z + y - I*z - I, y, z, domain='ZZ_I')) In [15]: p.as_poly(y).primitive() Out[15]: (z + 1, Poly(y**3 - I*y**2 + y - I, y, domain='ZZ_I[z]')) ``` The bug for `sqf` does not require there to be any complex numbers i.e. it happens with ZZ as well as ZZ_I: ```python In [1]: sqf(expand(((y - 2)**2 * (y + 2) * (x + 1)))) Out[1]: x + 1 In [2]: sqf(expand(((y - 2)**2 * (y + 2) * (z + 1)))) Out[2]: 2 (y - 2) ⋅(y + 2) ``` I think that the difference in the case of `factor` is just that for `ZZ_I` it uses `dmp_sqf_list` whereas for `ZZ` it does not. To be clear the expected output for sqf is like: ```python In [3]: sqf(expand((x-1)*(x-2)*(x-3)**2*(x-4)**5)) Out[3]: 5 2 ⎛ 2 ⎞ (x - 4) ⋅(x - 3) ⋅⎝x - 3⋅x + 2⎠ ``` It is supposed to be a factorisation of the expression like with `factor` except that it is not a full factorisation into irreducibles. This fixes the bug in `dmp_sqf_list` makes all examples above with `sqf` work as intended: ```diff diff --git a/sympy/polys/sqfreetools.py b/sympy/polys/sqfreetools.py index e11eebc7b7..2fd4f0b8ca 100644 --- a/sympy/polys/sqfreetools.py +++ b/sympy/polys/sqfreetools.py @@ -23,7 +23,7 @@ from sympy.polys.euclidtools import ( dup_inner_gcd, dmp_inner_gcd, dup_gcd, dmp_gcd, - dmp_resultant) + dmp_resultant, dmp_primitive) from sympy.polys.galoistools import ( gf_sqf_list, gf_sqf_part) from sympy.polys.polyerrors import ( @@ -401,30 +401,36 @@ def dmp_sqf_list(f, u, K, all=False): deg = dmp_degree(f, u) if deg < 0: return coeff, [] - elif deg == 0: - coeff2, factors = dmp_sqf_list(dmp_LC(f, u), u-1, K, all=all) - factors = [([fac], exp) for fac, exp in factors] - return coeff*coeff2, factors - result, i = [], 1 + content, f = dmp_primitive(f, u, K) - h = dmp_diff(f, 1, u, K) - g, p, q = dmp_inner_gcd(f, h, u, K) + result = [] - while True: - d = dmp_diff(p, 1, u, K) - h = dmp_sub(q, d, u, K) + if deg != 0: - if dmp_zero_p(h, u): - result.append((p, i)) - break + h = dmp_diff(f, 1, u, K) + g, p, q = dmp_inner_gcd(f, h, u, K) - g, p, q = dmp_inner_gcd(p, h, u, K) + i = 1 - if all or dmp_degree(g, u) > 0: - result.append((g, i)) + while True: + d = dmp_diff(p, 1, u, K) + h = dmp_sub(q, d, u, K) - i += 1 + if dmp_zero_p(h, u): + result.append((p, i)) + break + + g, p, q = dmp_inner_gcd(p, h, u, K) + + if all or dmp_degree(g, u) > 0: + result.append((g, i)) + + i += 1 + + coeff_content, result_content = dmp_sqf_list(content, u-1, K, all=all) + coeff *= coeff_content + result += [([fac], exp) for fac, exp in result_content] return coeff, result ``` Unfortunately that does not fix the original problem. It turns out that `dmp_sqf_list` is not even being called but rather `dmp_sqf_part` which does seem to return correct output: ```python In [3]: a, b = symbols("a b") In [4]: e = ((b - I)**2 * (b + I) * (a + 1)) In [5]: e Out[5]: 2 (a + 1)⋅(b - ⅈ) ⋅(b + ⅈ) In [6]: expand(e) Out[6]: 3 2 3 2 a⋅b - ⅈ⋅a⋅b + a⋅b - ⅈ⋅a + b - ⅈ⋅b + b - ⅈ In [7]: sqf_part(expand(e)) Out[7]: 2 2 a⋅b + a + b + 1 In [8]: sqf_part(expand(e)).factor() Out[8]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ In [9]: expand(e).factor() # wrong Out[9]: ⎛ 2 ⎞ (a + 1)⋅⎝b + 1⎠ ``` Here the output from `sqf_part` is correct for `sqf_part` but the same output is not correct for `factor`. The problem then is that factor is returning the `sqf_part`. 
The bug is somewhere in `dmp_ext_factor`: https://github.com/sympy/sympy/blob/87bf9d4343202ee2d4d544a0ee8c6cdfe36feb6a/sympy/polys/factortools.py#L1280-L1307 I'm not familiar with the algorithm in use here. Again it looks very similar to the corresponding univariate algorithm. In the debugger we already have a wrong output from `dmp_factor_list_include` because we only get two factors when there should be three. The output of `dmp_factor_list` is correct for the given input so the problem needs to be earlier. The input `f` seems correct and `dmp_sqf_part` gets the right square free part. Maybe the bug is triggered because the square free part has only real coefficients i.e. we're trying to factorise a polynomial in `QQ(I)[a,b]` but its square free part is in `QQ[a,b]`: ```python In [11]: e Out[11]: 2 (a + 1)⋅(b - ⅈ) ⋅(b + ⅈ) In [12]: sqf_part(e) Out[12]: 2 2 a⋅b + a + b + 1 ``` It should be possible for `dmp_trial_division` to use the factors that are found to find all of the factors e.g.: ```python In [20]: quo(e, sqf_part(expand(e))) Out[20]: b - ⅈ ``` I'm not sure that's how the algorithm is supposed to work though. I don't actually know where this particular algorithm comes from. The algorithm in `dup_ext_factor` seems to be algorithm 3.6.4 in Cohen's "A course in computational number theory": http://tomlr.free.fr/Math%E9matiques/Math%20Complete/Number%20theory/A%20course%20in%20computational%20algebraic%20number%20theory%20-%20Cohen%20H..pdf That algorithm is written for univariate polynomials though. I'm not sure if `dmp_ext_factor` is a valid generalisation of the algorithm to the multivariate case. Possibly the `dmp_ext_factor` algorithm is from: Algebraic factoring and rational function integration Barry M. Trager 1976 https://dl.acm.org/doi/10.1145/800205.806338 The problem with `sqf` was fixed in gh-26514 but the problem with `factor` (`dmp_ext_factor`) remains. The algorithm in `dmp_ext_factor` is the one in the Trager paper looks correct per the paper. It also calls `dmp_sqf_norm` which is also defined in the paper and also looks correct. However `dmp_sqf_norm` calls `dmp_sqf_p` to check if the polynomial is square free and that seems to have a similar bug to the one that `dmp_sqf_list` had: https://github.com/sympy/sympy/blob/af0c8559a845df49a6d0f8855659cebd0ddc9bc8/sympy/polys/sqfreetools.py#L55-L74 It is not enough just to differentiate wrt one arbitrarily chosen variable in the multivariate case. Hence: ```python In [22]: p = x**2*y**4 + 2*x**2*y**2 + x**2 + 2*x*y**4 + 4*x*y**2 + 2*x + 2*y**4 + 4*y**2 + 2 In [23]: p Out[23]: 2 4 2 2 2 4 2 4 2 x ⋅y + 2⋅x ⋅y + x + 2⋅x⋅y + 4⋅x⋅y + 2⋅x + 2⋅y + 4⋅y + 2 In [24]: p.factor() Out[24]: 2 ⎛ 2 ⎞ ⎛ 2 ⎞ ⎝y + 1⎠ ⋅⎝x + 2⋅x + 2⎠ In [25]: Poly(p).is_sqf # wrong Out[25]: True ``` I think that wrong result from `dmp_sqf_p` leads to the wrong result in `dmp_sqf_norm` and in turn `dmp_ext_factor`.
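The criterion the discussion converges on — a polynomial is square-free only if, for every variable, the gcd with the derivative in that variable has degree zero in that variable — can be sketched with public SymPy APIs. This is only an illustrative restatement (the helper name is invented here); the actual fix lives in the low-level `dmp_sqf_p`:

```python
from sympy import Poly, gcd, diff
from sympy.abc import x, y

def is_squarefree(expr, gens):
    """Check square-freeness in every variable: a repeated factor with
    positive degree in some generator also divides the derivative with
    respect to that generator, so the gcd would have positive degree."""
    for g in gens:
        d = diff(expr, g)
        if d == 0:
            continue
        if Poly(gcd(expr, d), g).degree() > 0:
            return False
    return True

print(is_squarefree((y**2 + 1)**2*(x**2 + 2*x + 2), [x, y]))  # False
print(is_squarefree((y**2 + 1)*(x**2 + 2*x + 2), [x, y]))     # True
```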
[ { "body": "`sympy.factor(expr)` produces the wrong output if an underscore is used in the name of one of the symbols in `expr`.\r\nEDIT: The issue is more general. The correctness of the factorization output depends on the names of the variables. Using symbol names with more than one character also changes the output, for example.\r\n\r\n**Observed behavior:**\r\nFor example,\r\n```\r\na_n, m = sympy.symbols('a_n m', real=True, positive=True)\r\nexpr = (-a_n**4 *m**2 / (a_n**2 + 1) - 2 * a_n**4 *m / (a_n**2 + 1) - a_n**4 - a_n**4 / (a_n**2 + 1) + 2 * sympy.I * a_n**3 *m**2 / (a_n**2 + 1) + 4 * sympy.I * a_n**3 *m / (a_n**2 + 1) + 2 * sympy.I * a_n**3 + 2 * sympy.I * a_n**3 / (a_n**2 + 1) + 2 * sympy.I * a_n *m**2 / (a_n**2 + 1) + 4 * sympy.I * a_n *m / (a_n**2 + 1) + 2 * sympy.I * a_n + 2 * sympy.I * a_n / (a_n**2 + 1) +m**2 / (a_n**2 + 1) + 2 *m / (a_n**2 + 1) + 1 + 1 / (a_n**2 + 1))\r\n```\r\ngives different numerical results if I factor `expr` and replace `a_n` and `m` with numbers.\r\n`expr.subs({a_n: 1, m: 1})` produces `12i`, whereas `expr.factor().subs({a_n: 1, m: 1})` produces `-6`.\r\nA correct behavior would result in `12i`, or equivalent numerical expressions, in both cases.\r\n\r\nOn the other hand, defining\r\n```\r\na_n, m = sympy.symbols('a m', real=True, positive=True)\r\n```\r\ncorrectly produces the equivalent outputs `12i` in both cases.\r\n\r\n**Sympy version**\r\nI observe this in the stable branch, but also in 1.12 and 1.10. Didn't check other versions.\r\n\r\n**Other modifications that produce the same error**\r\nUsing `a_n, m = sympy.symbols('an m', real=True, positive=True)`, so a symbol with a name of more than one character, also reproduces the error.", "number": 26497, "title": "factor produces wrong output" } ]
af0c8559a845df49a6d0f8855659cebd0ddc9bc8
{ "head_commit": "2c01ca924ef4ff8aed4f4d0f5b3a222f44ac299a", "head_commit_message": "polys: rename PolyElement.norm_algebraic -> norm", "patch_to_review": "diff --git a/doc/src/modules/polys/internals.rst b/doc/src/modules/polys/internals.rst\nindex 31c8659be326..25d24f6ac28c 100644\n--- a/doc/src/modules/polys/internals.rst\n+++ b/doc/src/modules/polys/internals.rst\n@@ -545,7 +545,9 @@ Polynomial factorization in characteristic zero:\n \n .. currentmodule:: sympy.polys.factortools\n \n+.. autofunction:: dup_trial_division\n .. autofunction:: dmp_trial_division\n+.. autofunction:: dup_zz_mignotte_bound\n .. autofunction:: dmp_zz_mignotte_bound\n .. autofunction:: dup_zz_hensel_step\n .. autofunction:: dup_zz_hensel_lift\n@@ -559,16 +561,49 @@ Polynomial factorization in characteristic zero:\n .. autofunction:: dmp_zz_wang_non_divisors\n .. autofunction:: dmp_zz_wang_test_points\n .. autofunction:: dmp_zz_wang_lead_coeffs\n+.. autofunction:: dup_zz_diophantine\n .. autofunction:: dmp_zz_diophantine\n .. autofunction:: dmp_zz_wang_hensel_lifting\n .. autofunction:: dmp_zz_wang\n .. autofunction:: dmp_zz_factor\n+.. autofunction:: dup_qq_i_factor\n+.. autofunction:: dup_zz_i_factor\n+.. autofunction:: dmp_qq_i_factor\n+.. autofunction:: dmp_zz_i_factor\n+.. autofunction:: dup_ext_factor\n .. autofunction:: dmp_ext_factor\n .. autofunction:: dup_gf_factor\n+.. autofunction:: dmp_gf_factor\n+.. autofunction:: dup_factor_list\n+.. autofunction:: dup_factor_list_include\n .. autofunction:: dmp_factor_list\n .. autofunction:: dmp_factor_list_include\n+.. autofunction:: dup_irreducible_p\n .. autofunction:: dmp_irreducible_p\n \n+Square-free factorization:\n+\n+.. currentmodule:: sympy.polys.sqfreetools\n+\n+.. autofunction:: dup_sqf_p\n+.. autofunction:: dmp_sqf_p\n+.. autofunction:: dup_sqf_norm\n+.. autofunction:: dmp_sqf_norm\n+.. autofunction:: dmp_norm\n+.. autofunction:: dup_gf_sqf_part\n+.. autofunction:: dmp_gf_sqf_part\n+.. autofunction:: dup_sqf_part\n+.. autofunction:: dmp_sqf_part\n+.. autofunction:: dup_gf_sqf_list\n+.. autofunction:: dmp_gf_sqf_list\n+.. autofunction:: dup_sqf_list\n+.. autofunction:: dup_sqf_list_include\n+.. autofunction:: dmp_sqf_list\n+.. autofunction:: dmp_sqf_list_include\n+.. autofunction:: dup_gff_list\n+.. autofunction:: dmp_gff_list\n+\n+\n Groebner basis algorithms\n *************************\n \ndiff --git a/doc/src/modules/polys/literature.rst b/doc/src/modules/polys/literature.rst\nindex 47fa9453baa9..5cfc3336ed68 100644\n--- a/doc/src/modules/polys/literature.rst\n+++ b/doc/src/modules/polys/literature.rst\n@@ -144,4 +144,16 @@ a theoretical foundation for implementing polynomials manipulation module.\n https://isc.tamu.edu/resources/preprints/1996/1996-02.pdf\n \n .. [Cohen93] Henri Cohen. \"A Course in Computational Algebraic Number Theory\",\n- Springer, 1993.\n+ Springer, 1993.\n+\n+.. [Trager76] Barry M. Trager. \"Algebraic factoriing and rational function\n+ integration\", Proceedings of SYMSAC 1976, pp. 219-226, ACM, 1976.\n+ https://dl.acm.org/doi/abs/10.1145/800205.806338\n+\n+.. [Yun76] David Y.Y. Yun. \"On square-free decomposition algorithms\",\n+ Proceedings of SYMSAC 1976, pp. 219-226, ACM, 1976.\n+ https://dl.acm.org/doi/10.1145/800205.806320\n+\n+.. [Abbott13] John Abbott. \"Bounds on factors in Z[x]\".\n+ Journal of Symbolic Computation 50 (2013), pp. 
532-563\n+ https://doi.org/10.1016/j.jsc.2012.09.004\ndiff --git a/sympy/polys/compatibility.py b/sympy/polys/compatibility.py\nindex 6635c61e569c..ee71d9efedae 100644\n--- a/sympy/polys/compatibility.py\n+++ b/sympy/polys/compatibility.py\n@@ -101,6 +101,7 @@\n from sympy.polys.densetools import dup_mirror\n from sympy.polys.densetools import dup_scale\n from sympy.polys.densetools import dup_shift\n+from sympy.polys.densetools import dmp_shift\n from sympy.polys.densetools import dup_transform\n from sympy.polys.densetools import dup_compose\n from sympy.polys.densetools import dmp_compose\n@@ -209,9 +210,10 @@\n from sympy.polys.rootisolation import dup_isolate_all_roots\n \n from sympy.polys.sqfreetools import (\n- dup_sqf_p, dmp_sqf_p, dup_sqf_norm, dmp_sqf_norm, dup_gf_sqf_part, dmp_gf_sqf_part,\n- dup_sqf_part, dmp_sqf_part, dup_gf_sqf_list, dmp_gf_sqf_list, dup_sqf_list,\n- dup_sqf_list_include, dmp_sqf_list, dmp_sqf_list_include, dup_gff_list, dmp_gff_list)\n+ dup_sqf_p, dmp_sqf_p, dmp_norm, dup_sqf_norm, dmp_sqf_norm,\n+ dup_gf_sqf_part, dmp_gf_sqf_part, dup_sqf_part, dmp_sqf_part,\n+ dup_gf_sqf_list, dmp_gf_sqf_list, dup_sqf_list, dup_sqf_list_include,\n+ dmp_sqf_list, dmp_sqf_list_include, dup_gff_list, dmp_gff_list)\n \n from sympy.polys.galoistools import (\n gf_degree, gf_LC, gf_TC, gf_strip, gf_from_dict,\n@@ -515,6 +517,8 @@ def dup_scale(self, f, a):\n return self.from_dense(dup_scale(self.to_dense(f), a, self.domain))\n def dup_shift(self, f, a):\n return self.from_dense(dup_shift(self.to_dense(f), a, self.domain))\n+ def dmp_shift(self, f, a):\n+ return self.from_dense(dmp_shift(self.to_dense(f), a, self.ngens-1, self.domain))\n def dup_transform(self, f, p, q):\n return self.from_dense(dup_transform(self.to_dense(f), self.to_dense(p), self.to_dense(q), self.domain))\n \n@@ -877,6 +881,10 @@ def dup_sqf_p(self, f):\n def dmp_sqf_p(self, f):\n return dmp_sqf_p(self.to_dense(f), self.ngens-1, self.domain)\n \n+ def dmp_norm(self, f):\n+ n = dmp_norm(self.to_dense(f), self.ngens-1, self.domain)\n+ return self.to_ground().from_dense(n)\n+\n def dup_sqf_norm(self, f):\n s, F, R = dup_sqf_norm(self.to_dense(f), self.domain)\n return (s, self.from_dense(F), self.to_ground().from_dense(R))\ndiff --git a/sympy/polys/densetools.py b/sympy/polys/densetools.py\nindex 30760c6559c6..0c9db99e9e68 100644\n--- a/sympy/polys/densetools.py\n+++ b/sympy/polys/densetools.py\n@@ -782,7 +782,7 @@ def dmp_ground_extract(f, g, u, K):\n \n def dup_real_imag(f, K):\n \"\"\"\n- Return bivariate polynomials ``f1`` and ``f2``, such that ``f = f1 + f2*I``.\n+ Find ``f1`` and ``f2``, such that ``f(x+I*y) = f1(x,y) + f2(x,y)*I``.\n \n Examples\n ========\n@@ -793,6 +793,11 @@ def dup_real_imag(f, K):\n >>> R.dup_real_imag(x**3 + x**2 + x + 1)\n (x**3 + x**2 - 3*x*y**2 + x - y**2 + 1, 3*x**2*y + 2*x*y - y**3 + y)\n \n+ >>> from sympy.abc import x, y, z\n+ >>> from sympy import I\n+ >>> (z**3 + z**2 + z + 1).subs(z, x+I*y).expand().collect(I)\n+ x**3 + x**2 - 3*x*y**2 + x - y**2 + I*(3*x**2*y + 2*x*y - y**3 + y) + 1\n+\n \"\"\"\n if not K.is_ZZ and not K.is_QQ:\n raise DomainError(\"computing real and imaginary parts is not supported over %s\" % K)\n@@ -894,6 +899,44 @@ def dup_shift(f, a, K):\n return f\n \n \n+def dmp_shift(f, a, u, K):\n+ \"\"\"\n+ Evaluate efficiently Taylor shift ``f(X + A)`` in ``K[X]``.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import symbols, ring, ZZ\n+ >>> x, y = symbols('x y')\n+ >>> R, _, _ = ring([x, y], ZZ)\n+\n+ >>> p = x**2*y + 2*x*y + 3*x + 4*y + 5\n+\n+ >>> 
R.dmp_shift(R(p), [ZZ(1), ZZ(2)])\n+ x**2*y + 2*x**2 + 4*x*y + 11*x + 7*y + 22\n+\n+ >>> p.subs({x: x + 1, y: y + 2}).expand()\n+ x**2*y + 2*x**2 + 4*x*y + 11*x + 7*y + 22\n+ \"\"\"\n+ if not u:\n+ return dup_shift(f, a[0], K)\n+\n+ if dmp_zero_p(f, u):\n+ return f\n+\n+ a0, a1 = a[0], a[1:]\n+\n+ f = [ dmp_shift(c, a1, u-1, K) for c in f ]\n+ n = len(f) - 1\n+\n+ for i in range(n, 0, -1):\n+ for j in range(0, i):\n+ afj = dmp_mul_ground(f[j], a0, u-1, K)\n+ f[j + 1] = dmp_add(f[j + 1], afj, u-1, K)\n+\n+ return f\n+\n+\n def dup_transform(f, p, q, K):\n \"\"\"\n Evaluate functional transformation ``q**n * f(p/q)`` in ``K[x]``.\ndiff --git a/sympy/polys/factortools.py b/sympy/polys/factortools.py\nindex c0316375ba1a..dc477f02e672 100644\n--- a/sympy/polys/factortools.py\n+++ b/sympy/polys/factortools.py\n@@ -55,8 +55,7 @@\n dup_primitive, dmp_ground_primitive,\n dmp_eval_tail,\n dmp_eval_in, dmp_diff_eval_in,\n- dmp_compose,\n- dup_shift, dup_mirror)\n+ dup_shift, dmp_shift, dup_mirror)\n \n from sympy.polys.euclidtools import (\n dmp_primitive,\n@@ -65,7 +64,9 @@\n from sympy.polys.sqfreetools import (\n dup_sqf_p,\n dup_sqf_norm, dmp_sqf_norm,\n- dup_sqf_part, dmp_sqf_part)\n+ dup_sqf_part, dmp_sqf_part,\n+ _dup_check_degrees, _dmp_check_degrees,\n+ )\n \n from sympy.polys.polyutils import _sort_factors\n from sympy.polys.polyconfig import query\n@@ -125,6 +126,9 @@ def dmp_trial_division(f, factors, u, K):\n else:\n break\n \n+ if k == 0:\n+ raise RuntimeError(\"trial division failed\")\n+\n result.append((factor, k))\n \n return _sort_factors(result)\n@@ -133,7 +137,7 @@ def dmp_trial_division(f, factors, u, K):\n def dup_zz_mignotte_bound(f, K):\n \"\"\"\n The Knuth-Cohen variant of Mignotte bound for\n- univariate polynomials in `K[x]`.\n+ univariate polynomials in ``K[x]``.\n \n Examples\n ========\n@@ -145,17 +149,18 @@ def dup_zz_mignotte_bound(f, K):\n >>> R.dup_zz_mignotte_bound(f)\n 152\n \n- By checking `factor(f)` we can see that max coeff is 8\n+ By checking ``factor(f)`` we can see that max coeff is 8\n \n- Also consider a case that `f` is irreducible for example `f = 2*x**2 + 3*x + 4`\n- To avoid a bug for these cases, we return the bound plus the max coefficient of `f`\n+ Also consider a case that ``f`` is irreducible for example\n+ ``f = 2*x**2 + 3*x + 4``. 
To avoid a bug for these cases, we return the\n+ bound plus the max coefficient of ``f``\n \n >>> f = 2*x**2 + 3*x + 4\n >>> R.dup_zz_mignotte_bound(f)\n 6\n \n- Lastly,To see the difference between the new and the old Mignotte bound\n- consider the irreducible polynomial::\n+ Lastly, to see the difference between the new and the old Mignotte bound\n+ consider the irreducible polynomial:\n \n >>> f = 87*x**7 + 4*x**6 + 80*x**5 + 17*x**4 + 9*x**3 + 12*x**2 + 49*x + 26\n >>> R.dup_zz_mignotte_bound(f)\n@@ -167,7 +172,7 @@ def dup_zz_mignotte_bound(f, K):\n References\n ==========\n \n- ..[1] [Abbott2013]_\n+ ..[1] [Abbott13]_\n \n \"\"\"\n from sympy.functions.combinatorial.factorials import binomial\n@@ -702,6 +707,9 @@ def dup_zz_factor(f, K):\n H = dup_zz_zassenhaus(g, K)\n \n factors = dup_trial_division(f, H, K)\n+\n+ _dup_check_degrees(f, factors)\n+\n return cont, factors\n \n \n@@ -1177,6 +1185,8 @@ def dmp_zz_factor(f, u, K):\n for g, k in dmp_zz_factor(G, u - 1, K)[1]:\n factors.insert(0, ([g], k))\n \n+ _dmp_check_degrees(f, u, factors)\n+\n return cont, _sort_factors(factors)\n \n \n@@ -1247,7 +1257,71 @@ def dmp_zz_i_factor(f, u, K0):\n \n \n def dup_ext_factor(f, K):\n- \"\"\"Factor univariate polynomials over algebraic number fields. \"\"\"\n+ r\"\"\"Factor univariate polynomials over algebraic number fields.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n+\n+ Examples\n+ ========\n+\n+ First define the algebraic number field `K = \\mathbb{Q}(\\sqrt{2})`:\n+\n+ >>> from sympy import QQ, sqrt\n+ >>> from sympy.polys.factortools import dup_ext_factor\n+ >>> K = QQ.algebraic_field(sqrt(2))\n+\n+ We can now factorise the polynomial `x^2 - 2` over `K`:\n+\n+ >>> p = [K(1), K(0), K(-2)] # x^2 - 2\n+ >>> p1 = [K(1), -K.unit] # x - sqrt(2)\n+ >>> p2 = [K(1), +K.unit] # x + sqrt(2)\n+ >>> dup_ext_factor(p, K) == (K.one, [(p1, 1), (p2, 1)])\n+ True\n+\n+ Usually this would be done at a higher level:\n+\n+ >>> from sympy import factor\n+ >>> from sympy.abc import x\n+ >>> factor(x**2 - 2, extension=sqrt(2))\n+ (x - sqrt(2))*(x + sqrt(2))\n+\n+ Explanation\n+ ===========\n+\n+ Uses Trager's algorithm. In particular this function is algorithm\n+ ``alg_factor`` from [Trager76]_.\n+\n+ If `f` is a polynomial in `k(a)[x]` then its norm `g(x)` is a polynomial in\n+ `k[x]`. If `g(x)` is square-free and has irreducible factors `g_1(x)`,\n+ `g_2(x)`, `\\cdots` then the irreducible factors of `f` in `k(a)[x]` are\n+ given by `f_i(x) = \\gcd(f(x), g_i(x))` where the GCD is computed in\n+ `k(a)[x]`.\n+\n+ The first step in Trager's algorithm is to find an integer shift `s` so\n+ that `f(x-sa)` has square-free norm. Then the norm is factorized in `k[x]`\n+ and the GCD of (shifted) `f` with each factor gives the shifted factors of\n+ `f`. 
At the end the shift is undone to recover the unshifted factors of `f`\n+ in `k(a)[x]`.\n+\n+ The algorithm reduces the problem of factorization in `k(a)[x]` to\n+ factorization in `k[x]` with the main additional steps being to compute the\n+ norm (a resultant calculation in `k[x,y]`) and some polynomial GCDs in\n+ `k(a)[x]`.\n+\n+ In practice in SymPy the base field `k` will be the rationals :ref:`QQ` and\n+ this function factorizes a polynomial with coefficients in an algebraic\n+ number field like `\\mathbb{Q}(\\sqrt{2})`.\n+\n+ See Also\n+ ========\n+\n+ dmp_ext_factor:\n+ Analogous function for multivariate polynomials over ``k(a)``.\n+ dup_sqf_norm:\n+ Subroutine ``sqfr_norm`` also from [Trager76]_.\n+ sympy.polys.polytools.factor:\n+ The high-level function that ultimately uses this function as needed.\n+ \"\"\"\n n, lc = dup_degree(f), dup_LC(f, K)\n \n f = dup_monic(f, K)\n@@ -1274,11 +1348,59 @@ def dup_ext_factor(f, K):\n factors[i] = h\n \n factors = dup_trial_division(F, factors, K)\n+\n+ _dup_check_degrees(F, factors)\n+\n return lc, factors\n \n \n def dmp_ext_factor(f, u, K):\n- \"\"\"Factor multivariate polynomials over algebraic number fields. \"\"\"\n+ r\"\"\"Factor multivariate polynomials over algebraic number fields.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n+\n+ Examples\n+ ========\n+\n+ First define the algebraic number field `K = \\mathbb{Q}(\\sqrt{2})`:\n+\n+ >>> from sympy import QQ, sqrt\n+ >>> from sympy.polys.factortools import dmp_ext_factor\n+ >>> K = QQ.algebraic_field(sqrt(2))\n+\n+ We can now factorise the polynomial `x^2 y^2 - 2` over `K`:\n+\n+ >>> p = [[K(1),K(0),K(0)], [], [K(-2)]] # x**2*y**2 - 2\n+ >>> p1 = [[K(1),K(0)], [-K.unit]] # x*y - sqrt(2)\n+ >>> p2 = [[K(1),K(0)], [+K.unit]] # x*y + sqrt(2)\n+ >>> dmp_ext_factor(p, 1, K) == (K.one, [(p1, 1), (p2, 1)])\n+ True\n+\n+ Usually this would be done at a higher level:\n+\n+ >>> from sympy import factor\n+ >>> from sympy.abc import x, y\n+ >>> factor(x**2*y**2 - 2, extension=sqrt(2))\n+ (x*y - sqrt(2))*(x*y + sqrt(2))\n+\n+ Explanation\n+ ===========\n+\n+ This is Trager's algorithm for multivariate polynomials. 
In particular this\n+ function is algorithm ``alg_factor`` from [Trager76]_.\n+\n+ See :func:`dup_ext_factor` for explanation.\n+\n+ See Also\n+ ========\n+\n+ dmp_ext_factor:\n+ Analogous function for multivariate polynomials over ``k(a)``.\n+ dmp_sqf_norm:\n+ Multivariate version of subroutine ``sqfr_norm`` also from [Trager76]_.\n+ sympy.polys.polytools.factor:\n+ The high-level function that ultimately uses this function as needed.\n+ \"\"\"\n if not u:\n return dup_ext_factor(f, K)\n \n@@ -1296,15 +1418,18 @@ def dmp_ext_factor(f, u, K):\n if len(factors) == 1:\n factors = [f]\n else:\n- H = dmp_raise([K.one, s*K.unit], u, 0, K)\n-\n for i, (factor, _) in enumerate(factors):\n h = dmp_convert(factor, u, K.dom, K)\n h, _, g = dmp_inner_gcd(h, g, u, K)\n- h = dmp_compose(h, H, u, K)\n+ a = [si*K.unit for si in s]\n+ h = dmp_shift(h, a, u, K)\n factors[i] = h\n \n- return lc, dmp_trial_division(F, factors, u, K)\n+ result = dmp_trial_division(F, factors, u, K)\n+\n+ _dmp_check_degrees(F, u, result)\n+\n+ return lc, result\n \n \n def dup_gf_factor(f, K):\ndiff --git a/sympy/polys/numberfields/subfield.py b/sympy/polys/numberfields/subfield.py\nindex 4c6a1e147eb5..b959ddeb27a6 100644\n--- a/sympy/polys/numberfields/subfield.py\n+++ b/sympy/polys/numberfields/subfield.py\n@@ -356,7 +356,7 @@ def primitive_element(extension, x=None, *, ex=False, polys=False):\n continue\n _, factors = factor_list(g, extension=ext)\n g = _choose_factor(factors, x, gen)\n- s, _, g = g.sqf_norm()\n+ [s], _, g = g.sqf_norm()\n gen += s*ext\n coeffs.append(s)\n \n@@ -378,7 +378,7 @@ def primitive_element(extension, x=None, *, ex=False, polys=False):\n L = QQ.algebraic_field((p, ext))\n _, factors = factor_list(f, domain=L)\n f = _choose_factor(factors, x, gen)\n- s, g, f = f.sqf_norm()\n+ [s], g, f = f.sqf_norm()\n gen += s*ext\n coeffs.append(s)\n K = QQ.algebraic_field((f, gen))\ndiff --git a/sympy/polys/polyclasses.py b/sympy/polys/polyclasses.py\nindex 058f32681e22..c19f72a227f8 100644\n--- a/sympy/polys/polyclasses.py\n+++ b/sympy/polys/polyclasses.py\n@@ -81,6 +81,7 @@\n dmp_compose,\n dup_decompose,\n dup_shift,\n+ dmp_shift,\n dup_transform,\n dmp_lift)\n \n@@ -897,6 +898,11 @@ def shift(f, a):\n \n return f._shift(f.dom.convert(a))\n \n+ def shift_list(f, a):\n+ \"\"\"Efficiently compute Taylor shift ``f(X + A)``. \"\"\"\n+ a = [f.dom.convert(ai) for ai in a]\n+ return f._shift_list(a)\n+\n def _shift(f, a):\n raise NotImplementedError\n \n@@ -1574,6 +1580,10 @@ def _shift(f, a):\n \"\"\"Efficiently compute Taylor shift ``f(x + a)``. \"\"\"\n return f.per(dup_shift(f._rep, a, f.dom))\n \n+ def _shift_list(f, a):\n+ \"\"\"Efficiently compute Taylor shift ``f(X + A)``. 
\"\"\"\n+ return f.per(dmp_shift(f._rep, a, f.lev, f.dom))\n+\n def _transform(f, p, q):\n \"\"\"Evaluate functional transformation ``q**n * f(p/q)``.\"\"\"\n return f.per(dup_transform(f._rep, p._rep, q._rep, f.dom))\ndiff --git a/sympy/polys/polytools.py b/sympy/polys/polytools.py\nindex b17310651072..80f317d6798a 100644\n--- a/sympy/polys/polytools.py\n+++ b/sympy/polys/polytools.py\n@@ -3102,13 +3102,32 @@ def shift(f, a):\n >>> Poly(x**2 - 2*x + 1, x).shift(2)\n Poly(x**2 + 2*x + 1, x, domain='ZZ')\n \n+ See Also\n+ ========\n+\n+ shift_list: Analogous method for multivariate polynomials.\n \"\"\"\n- if hasattr(f.rep, 'shift'):\n- result = f.rep.shift(a)\n- else: # pragma: no cover\n- raise OperationNotSupported(f, 'shift')\n+ return f.per(f.rep.shift(a))\n \n- return f.per(result)\n+ def shift_list(f, a):\n+ \"\"\"\n+ Efficiently compute Taylor shift ``f(X + A)``.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import Poly\n+ >>> from sympy.abc import x, y\n+\n+ >>> Poly(x*y, [x,y]).shift_list([1, 2]) == Poly((x+1)*(y+2), [x,y])\n+ True\n+\n+ See Also\n+ ========\n+\n+ shift: Analogous method for univariate polynomials.\n+ \"\"\"\n+ return f.per(f.rep.shift_list(a))\n \n def transform(f, p, q):\n \"\"\"\n@@ -3240,7 +3259,7 @@ def sqf_norm(f):\n >>> s, f, r = Poly(x**2 + 1, x, extension=[sqrt(3)]).sqf_norm()\n \n >>> s\n- 1\n+ [1]\n >>> f\n Poly(x**2 - 2*sqrt(3)*x + 4, x, domain='QQ<sqrt(3)>')\n >>> r\n@@ -6042,7 +6061,7 @@ def sqf_norm(f, *gens, **args):\n >>> from sympy.abc import x\n \n >>> sqf_norm(x**2 + 1, extension=[sqrt(3)])\n- (1, x**2 - 2*sqrt(3)*x + 4, x**4 - 4*x**2 + 16)\n+ ([1], x**2 - 2*sqrt(3)*x + 4, x**4 - 4*x**2 + 16)\n \n \"\"\"\n options.allowed_flags(args, ['polys'])\n@@ -6054,10 +6073,12 @@ def sqf_norm(f, *gens, **args):\n \n s, g, r = F.sqf_norm()\n \n+ s_expr = [Integer(si) for si in s]\n+\n if not opt.polys:\n- return Integer(s), g.as_expr(), r.as_expr()\n+ return s_expr, g.as_expr(), r.as_expr()\n else:\n- return Integer(s), g, r\n+ return s_expr, g, r\n \n \n @public\n@@ -6554,11 +6575,11 @@ def _try_factor(expr):\n \n try:\n return _generic_factor(f, gens, args, method='factor')\n- except PolynomialError as msg:\n+ except PolynomialError:\n if not f.is_commutative:\n return factor_nc(f)\n else:\n- raise PolynomialError(msg)\n+ raise\n \n \n @public\ndiff --git a/sympy/polys/rings.py b/sympy/polys/rings.py\nindex 57b0e92e6946..7e3db5d3997b 100644\n--- a/sympy/polys/rings.py\n+++ b/sympy/polys/rings.py\n@@ -2983,7 +2983,10 @@ def shift(f, a):\n if f.ring.is_univariate:\n return f.ring.dup_shift(f, a)\n else:\n- raise MultivariatePolynomialError(\"polynomial shift\")\n+ raise MultivariatePolynomialError(\"shift: use shift_list instead\")\n+\n+ def shift_list(f, a):\n+ return f.ring.dmp_shift(f, a)\n \n def sturm(f):\n if f.ring.is_univariate:\n@@ -2994,6 +2997,9 @@ def sturm(f):\n def gff_list(f):\n return f.ring.dmp_gff_list(f)\n \n+ def norm(f):\n+ return f.ring.dmp_norm(f)\n+\n def sqf_norm(f):\n return f.ring.dmp_sqf_norm(f)\n \ndiff --git a/sympy/polys/sqfreetools.py b/sympy/polys/sqfreetools.py\nindex a27b6f573aa9..58dbec904ee7 100644\n--- a/sympy/polys/sqfreetools.py\n+++ b/sympy/polys/sqfreetools.py\n@@ -12,12 +12,12 @@\n dup_LC, dmp_ground_LC,\n dmp_zero_p,\n dmp_ground,\n- dup_degree, dmp_degree,\n+ dup_degree, dmp_degree, dmp_degree_in, dmp_degree_list,\n dmp_raise, dmp_inject,\n dup_convert)\n from sympy.polys.densetools import (\n dup_diff, dmp_diff, dmp_diff_in,\n- dup_shift, dmp_compose,\n+ dup_shift, dmp_shift,\n dup_monic, dmp_ground_monic,\n 
dup_primitive, dmp_ground_primitive)\n from sympy.polys.euclidtools import (\n@@ -30,6 +30,22 @@\n MultivariatePolynomialError,\n DomainError)\n \n+\n+def _dup_check_degrees(f, result):\n+ \"\"\"Sanity check the degrees of a computed factorization in K[x].\"\"\"\n+ deg = sum(k * dup_degree(fac) for (fac, k) in result)\n+ assert deg == dup_degree(f)\n+\n+\n+def _dmp_check_degrees(f, u, result):\n+ \"\"\"Sanity check the degrees of a computed factorization in K[X].\"\"\"\n+ degs = [0] * (u + 1)\n+ for fac, k in result:\n+ degs_fac = dmp_degree_list(fac, u)\n+ degs = [d1 + k * d2 for d1, d2 in zip(degs, degs_fac)]\n+ assert tuple(degs) == dmp_degree_list(f, u)\n+\n+\n def dup_sqf_p(f, K):\n \"\"\"\n Return ``True`` if ``f`` is a square-free polynomial in ``K[x]``.\n@@ -70,20 +86,37 @@ def dmp_sqf_p(f, u, K):\n \"\"\"\n if dmp_zero_p(f, u):\n return True\n- else:\n- return not dmp_degree(dmp_gcd(f, dmp_diff(f, 1, u, K), u, K), u)\n+\n+ for i in range(u+1):\n+\n+ fp = dmp_diff_in(f, 1, i, u, K)\n+\n+ if dmp_zero_p(fp, u):\n+ continue\n+\n+ gcd = dmp_gcd(f, fp, u, K)\n+\n+ if dmp_degree_in(gcd, i, u) != 0:\n+ return False\n+\n+ return True\n \n \n def dup_sqf_norm(f, K):\n- \"\"\"\n- Square-free norm of ``f`` in ``K[x]``, useful over algebraic domains.\n+ r\"\"\"\n+ Find a shift of `f` in `K[x]` that has square-free norm.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n \n- Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))``\n- is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``.\n+ Returns `(s,g,r)`, such that `g(x)=f(x-sa)`, `r(x)=\\text{Norm}(g(x))` and\n+ `r` is a square-free polynomial over `k`.\n \n Examples\n ========\n \n+ We first create the algebraic number field `K=k(a)=\\mathbb{Q}(\\sqrt{3})`\n+ and rings `K[x]` and `k[x]`:\n+\n >>> from sympy.polys import ring, QQ\n >>> from sympy import sqrt\n \n@@ -91,15 +124,48 @@ def dup_sqf_norm(f, K):\n >>> R, x = ring(\"x\", K)\n >>> _, X = ring(\"x\", QQ)\n \n- >>> s, f, r = R.dup_sqf_norm(x**2 - 2)\n+ We can now find a square free norm for a shift of `f`:\n+\n+ >>> f = x**2 - 1\n+ >>> s, g, r = R.dup_sqf_norm(f)\n+\n+ The choice of shift `s` is arbitrary and the particular values returned for\n+ `g` and `r` are determined by `s`.\n \n >>> s == 1\n True\n- >>> f == x**2 + K([QQ(-2), QQ(0)])*x + 1\n+ >>> g == x**2 - 2*sqrt(3)*x + 2\n True\n- >>> r == X**4 - 10*X**2 + 1\n+ >>> r == X**4 - 8*X**2 + 4\n True\n \n+ The invariants are:\n+\n+ >>> g == f.shift(-s*K.unit)\n+ True\n+ >>> g.norm() == r\n+ True\n+ >>> r.is_squarefree\n+ True\n+\n+ Explanation\n+ ===========\n+\n+ This is part of Trager's algorithm for factorizing polynomials over\n+ algebraic number fields. 
In particular this function is algorithm\n+ ``sqfr_norm`` from [Trager76]_.\n+\n+ See Also\n+ ========\n+\n+ dmp_sqf_norm:\n+ Analogous function for multivariate polynomials over ``k(a)``.\n+ dmp_norm:\n+ Computes the norm of `f` directly without any shift.\n+ dup_ext_factor:\n+ Function implementing Trager's algorithm that uses this.\n+ sympy.polys.polytools.sqf_norm:\n+ High-level interface for using this function.\n \"\"\"\n if not K.is_Algebraic:\n raise DomainError(\"ground domain must be algebraic\")\n@@ -118,16 +184,62 @@ def dup_sqf_norm(f, K):\n return s, f, r\n \n \n+def _dmp_sqf_norm_shifts(f, u, K):\n+ \"\"\"Generate a sequence of candidate shifts for dmp_sqf_norm.\"\"\"\n+ #\n+ # We want to find a minimal shift if possible because shifting high degree\n+ # variables can be expensive e.g. x**10 -> (x + 1)**10. We try a few easy\n+ # cases first before the final infinite loop that is guaranteed to give\n+ # only finitely many bad shifts (see Trager76 for proof of this in the\n+ # univariate case).\n+ #\n+\n+ # First the trivial shift [0, 0, ...]\n+ n = u + 1\n+ s0 = [0] * n\n+ yield s0, f\n+\n+ # Shift in multiples of the generator of the extension field K\n+ a = K.unit\n+\n+ # Variables of degree > 0 ordered by increasing degree\n+ d = dmp_degree_list(f, u)\n+ var_indices = [i for di, i in sorted(zip(d, range(u+1))) if di > 0]\n+\n+ # Now try [1, 0, 0, ...], [0, 1, 0, ...]\n+ for i in var_indices:\n+ s1 = s0.copy()\n+ s1[i] = 1\n+ a1 = [-a*s1i for s1i in s1]\n+ f1 = dmp_shift(f, a1, u, K)\n+ yield s1, f1\n+\n+ # Now try [1, 1, 1, ...], [2, 2, 2, ...]\n+ j = 0\n+ while True:\n+ j += 1\n+ sj = [j] * n\n+ aj = [-a*j] * n\n+ fj = dmp_shift(f, aj, u, K)\n+ yield sj, fj\n+\n+\n def dmp_sqf_norm(f, u, K):\n- \"\"\"\n- Square-free norm of ``f`` in ``K[X]``, useful over algebraic domains.\n+ r\"\"\"\n+ Find a shift of ``f`` in ``K[X]`` that has square-free norm.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n \n- Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))``\n- is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``.\n+ Returns `(s,g,r)`, such that `g(x_1,x_2,\\cdots)=f(x_1-s_1 a, x_2 - s_2 a,\n+ \\cdots)`, `r(x)=\\text{Norm}(g(x))` and `r` is a square-free polynomial over\n+ `k`.\n \n Examples\n ========\n \n+ We first create the algebraic number field `K=k(a)=\\mathbb{Q}(i)` and rings\n+ `K[x]` and `k[x]`:\n+\n >>> from sympy.polys import ring, QQ\n >>> from sympy import I\n \n@@ -135,42 +247,140 @@ def dmp_sqf_norm(f, u, K):\n >>> R, x, y = ring(\"x,y\", K)\n >>> _, X, Y = ring(\"x,y\", QQ)\n \n- >>> s, f, r = R.dmp_sqf_norm(x*y + y**2)\n+ We can now find a square free norm for a shift of `f`:\n \n- >>> s == 1\n+ >>> f = x*y + y**2\n+ >>> s, g, r = R.dmp_sqf_norm(f)\n+\n+ The choice of shifts ``s`` is arbitrary and the particular values returned\n+ for ``g`` and ``r`` are determined by ``s``.\n+\n+ >>> s\n+ [0, 1]\n+ >>> g == x*y - I*x + y**2 - 2*I*y - 1\n+ True\n+ >>> r == X**2*Y**2 + X**2 + 2*X*Y**3 + 2*X*Y + Y**4 + 2*Y**2 + 1\n True\n- >>> f == x*y + y**2 + K([QQ(-1), QQ(0)])*y\n+\n+ The required invariants are:\n+\n+ >>> g == f.shift_list([-si*K.unit for si in s])\n True\n- >>> r == X**2*Y**2 + 2*X*Y**3 + Y**4 + Y**2\n+ >>> g.norm() == r\n True\n+ >>> r.is_squarefree\n+ True\n+\n+ Explanation\n+ ===========\n+\n+ This is part of Trager's algorithm for factorizing polynomials over\n+ algebraic number fields. 
In particular this function is a multivariate\n+ generalization of algorithm ``sqfr_norm`` from [Trager76]_.\n+\n+ See Also\n+ ========\n \n+ dup_sqf_norm:\n+ Analogous function for multivariate polynomials over ``k(a)``.\n+ dmp_norm:\n+ Computes the norm of `f` directly without any shift.\n+ dmp_ext_factor:\n+ Function implementing Trager's algorithm that uses this.\n+ sympy.polys.polytools.sqf_norm:\n+ High-level interface for using this function.\n \"\"\"\n if not u:\n- return dup_sqf_norm(f, K)\n+ s, g, r = dup_sqf_norm(f, K)\n+ return [s], g, r\n \n if not K.is_Algebraic:\n raise DomainError(\"ground domain must be algebraic\")\n \n g = dmp_raise(K.mod.to_list(), u + 1, 0, K.dom)\n- F = dmp_raise([K.one, -K.unit], u, 0, K)\n \n- s = 0\n+ for s, f in _dmp_sqf_norm_shifts(f, u, K):\n \n- while True:\n h, _ = dmp_inject(f, u, K, front=True)\n r = dmp_resultant(g, h, u + 1, K.dom)\n \n if dmp_sqf_p(r, u, K.dom):\n break\n- else:\n- f, s = dmp_compose(f, F, u, K), s + 1\n \n return s, f, r\n \n \n def dmp_norm(f, u, K):\n- \"\"\"\n- Norm of ``f`` in ``K[X1, ..., Xn]``, often not square-free.\n+ r\"\"\"\n+ Norm of ``f`` in ``K[X]``, often not square-free.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n+\n+ Examples\n+ ========\n+\n+ We first define the algebraic number field `K = k(a) = \\mathbb{Q}(\\sqrt{2})`:\n+\n+ >>> from sympy import QQ, sqrt\n+ >>> from sympy.polys.sqfreetools import dmp_norm\n+ >>> k = QQ\n+ >>> K = k.algebraic_field(sqrt(2))\n+\n+ We can now compute the norm of a polynomial `p` in `K[x,y]`:\n+\n+ >>> p = [[K(1)], [K(1),K.unit]] # x + y + sqrt(2)\n+ >>> N = [[k(1)], [k(2),k(0)], [k(1),k(0),k(-2)]] # x**2 + 2*x*y + y**2 - 2\n+ >>> dmp_norm(p, 1, K) == N\n+ True\n+\n+ In higher level functions that is:\n+\n+ >>> from sympy import expand, roots, minpoly\n+ >>> from sympy.abc import x, y\n+ >>> from math import prod\n+ >>> a = sqrt(2)\n+ >>> e = (x + y + a)\n+ >>> e.as_poly([x, y], extension=a).norm()\n+ Poly(x**2 + 2*x*y + y**2 - 2, x, y, domain='QQ')\n+\n+ This is equal to the product of the expressions `x + y + a_i` where the\n+ `a_i` are the conjugates of `a`:\n+\n+ >>> pa = minpoly(a)\n+ >>> pa\n+ _x**2 - 2\n+ >>> rs = roots(pa, multiple=True)\n+ >>> rs\n+ [sqrt(2), -sqrt(2)]\n+ >>> n = prod(e.subs(a, r) for r in rs)\n+ >>> n\n+ (x + y - sqrt(2))*(x + y + sqrt(2))\n+ >>> expand(n)\n+ x**2 + 2*x*y + y**2 - 2\n+\n+ Explanation\n+ ===========\n+\n+ Given an algebraic number field `K = k(a)` any element `b` of `K` can be\n+ represented as polynomial function `b=g(a)` where `g` is in `k[x]`. If the\n+ minimal polynomial of `a` over `k` is `p_a` then the roots `a_1`, `a_2`,\n+ `\\cdots` of `p_a(x)` are the conjugates of `a`. The norm of `b` is the\n+ product `g(a1) \\times g(a2) \\times \\cdots` and is an element of `k`.\n+\n+ As in [Trager76]_ we extend this norm to multivariate polynomials over `K`.\n+ If `b(x)` is a polynomial in `k(a)[X]` then we can think of `b` as being\n+ alternately a function `g_X(a)` where `g_X` is an element of `k[X][y]` i.e.\n+ a polynomial function with coefficients that are elements of `k[X]`. 
Then\n+ the norm of `b` is the product `g_X(a1) \\times g_X(a2) \\times \\cdots` and\n+ will be an element of `k[X]`.\n+\n+ See Also\n+ ========\n+\n+ dmp_sqf_norm:\n+ Compute a shift of `f` so that the `\\text{Norm}(f)` is square-free.\n+ sympy.polys.polytools.Poly.norm:\n+ Higher-level function that calls this.\n \"\"\"\n if not K.is_Algebraic:\n raise DomainError(\"ground domain must be algebraic\")\n@@ -206,6 +416,10 @@ def dup_sqf_part(f, K):\n >>> R.dup_sqf_part(x**3 - 3*x - 2)\n x**2 - x - 2\n \n+ See Also\n+ ========\n+\n+ sympy.polys.polytools.Poly.sqf_part\n \"\"\"\n if K.is_FiniteField:\n return dup_gf_sqf_part(f, K)\n@@ -264,6 +478,8 @@ def dmp_sqf_part(f, u, K):\n \n def dup_gf_sqf_list(f, K, all=False):\n \"\"\"Compute square-free decomposition of ``f`` in ``GF(p)[x]``. \"\"\"\n+ f_orig = f\n+\n f = dup_convert(f, K, K.dom)\n \n coeff, factors = gf_sqf_list(f, K.mod, K.dom, all=all)\n@@ -271,6 +487,8 @@ def dup_gf_sqf_list(f, K, all=False):\n for i, (f, k) in enumerate(factors):\n factors[i] = (dup_convert(f, K.dom, K), k)\n \n+ _dup_check_degrees(f_orig, factors)\n+\n return K.convert(coeff, K.dom), factors\n \n \n@@ -283,6 +501,8 @@ def dup_sqf_list(f, K, all=False):\n \"\"\"\n Return square-free decomposition of a polynomial in ``K[x]``.\n \n+ Uses Yun's algorithm from [Yun76]_.\n+\n Examples\n ========\n \n@@ -296,10 +516,26 @@ def dup_sqf_list(f, K, all=False):\n >>> R.dup_sqf_list(f, all=True)\n (2, [(1, 1), (x + 1, 2), (x + 2, 3)])\n \n+ See Also\n+ ========\n+\n+ dmp_sqf_list:\n+ Corresponding function for multivariate polynomials.\n+ sympy.polys.polytools.sqf_list:\n+ High-level function for square-free factorization of expressions.\n+ sympy.polys.polytools.Poly.sqf_list:\n+ Analogous method on :class:`~.Poly`.\n+\n+ References\n+ ==========\n+\n+ [Yun76]_\n \"\"\"\n if K.is_FiniteField:\n return dup_gf_sqf_list(f, K, all=all)\n \n+ f_orig = f\n+\n if K.is_Field:\n coeff = dup_LC(f, K)\n f = dup_monic(f, K)\n@@ -333,6 +569,8 @@ def dup_sqf_list(f, K, all=False):\n \n i += 1\n \n+ _dup_check_degrees(f_orig, result)\n+\n return coeff, result\n \n \n@@ -366,7 +604,7 @@ def dup_sqf_list_include(f, K, all=False):\n \n def dmp_sqf_list(f, u, K, all=False):\n \"\"\"\n- Return square-free decomposition of a polynomial in ``K[X]``.\n+ Return square-free decomposition of a polynomial in `K[X]`.\n \n Examples\n ========\n@@ -381,6 +619,26 @@ def dmp_sqf_list(f, u, K, all=False):\n >>> R.dmp_sqf_list(f, all=True)\n (1, [(1, 1), (x + y, 2), (x, 3)])\n \n+ Explanation\n+ ===========\n+\n+ Uses Yun's algorithm for univariate polynomials from [Yun76]_ recrusively.\n+ The multivariate polynomial is treated as a univariate polynomial in its\n+ leading variable. 
Then Yun's algorithm computes the square-free\n+ factorization of the primitive and the content is factored recursively.\n+\n+ It would be better to use a dedicated algorithm for multivariate\n+ polynomials instead.\n+\n+ See Also\n+ ========\n+\n+ dup_sqf_list:\n+ Corresponding function for univariate polynomials.\n+ sympy.polys.polytools.sqf_list:\n+ High-level function for square-free factorization of expressions.\n+ sympy.polys.polytools.Poly.sqf_list:\n+ Analogous method on :class:`~.Poly`.\n \"\"\"\n if not u:\n return dup_sqf_list(f, K, all=all)\n@@ -388,6 +646,8 @@ def dmp_sqf_list(f, u, K, all=False):\n if K.is_FiniteField:\n return dmp_gf_sqf_list(f, u, K, all=all)\n \n+ f_orig = f\n+\n if K.is_Field:\n coeff = dmp_ground_LC(f, u, K)\n f = dmp_ground_monic(f, u, K)\n@@ -445,6 +705,8 @@ def dmp_sqf_list(f, u, K, all=False):\n \n result = [(result[i], i) for i in sorted(result)]\n \n+ _dmp_check_degrees(f_orig, u, result)\n+\n return coeff, result\n \n \ndiff --git a/sympy/polys/tests/test_densetools.py b/sympy/polys/tests/test_densetools.py\nindex d9c9cf6e56b1..43dae691f52d 100644\n--- a/sympy/polys/tests/test_densetools.py\n+++ b/sympy/polys/tests/test_densetools.py\n@@ -20,7 +20,7 @@\n dup_primitive, dmp_ground_primitive,\n dup_extract, dmp_ground_extract,\n dup_real_imag,\n- dup_mirror, dup_scale, dup_shift,\n+ dup_mirror, dup_scale, dup_shift, dmp_shift,\n dup_transform,\n dup_compose, dmp_compose,\n dup_decompose,\n@@ -39,7 +39,7 @@\n \n from sympy.polys.specialpolys import f_polys\n \n-from sympy.polys.domains import FF, ZZ, QQ, EX, RR\n+from sympy.polys.domains import FF, ZZ, QQ, ZZ_I, QQ_I, EX, RR\n from sympy.polys.rings import ring\n \n from sympy.core.numbers import I\n@@ -75,6 +75,8 @@ def test_dup_integrate():\n \n \n def test_dmp_integrate():\n+ assert dmp_integrate([QQ(1)], 2, 0, QQ) == [QQ(1, 2), QQ(0), QQ(0)]\n+\n assert dmp_integrate([[[]]], 1, 2, QQ) == [[[]]]\n assert dmp_integrate([[[]]], 2, 2, QQ) == [[[]]]\n \n@@ -107,6 +109,9 @@ def test_dmp_integrate_in():\n dmp_swap(\n dmp_integrate(dmp_swap(f, 0, 2, 3, QQ), 3, 3, QQ), 0, 2, 3, QQ)\n \n+ raises(IndexError, lambda: dmp_integrate_in(f, 1, -1, 3, QQ))\n+ raises(IndexError, lambda: dmp_integrate_in(f, 1, 4, 3, QQ))\n+\n \n def test_dup_diff():\n assert dup_diff([], 1, ZZ) == []\n@@ -180,6 +185,8 @@ def test_dmp_diff_in():\n assert dmp_diff_in(f_6, 3, 2, 3, ZZ) == \\\n dmp_swap(dmp_diff(dmp_swap(f_6, 0, 2, 3, ZZ), 3, 3, ZZ), 0, 2, 3, ZZ)\n \n+ raises(IndexError, lambda: dmp_diff_in(f_6, 1, -1, 3, ZZ))\n+ raises(IndexError, lambda: dmp_diff_in(f_6, 1, 4, 3, ZZ))\n \n def test_dup_eval():\n assert dup_eval([], 7, ZZ) == 0\n@@ -217,6 +224,9 @@ def test_dmp_eval_in():\n assert dmp_eval_in(f, -2, 2, 2, ZZ) == \\\n [[45], [], [], [-9, -1, 0, -44]]\n \n+ raises(IndexError, lambda: dmp_eval_in(f_6, ZZ(1), -1, 3, ZZ))\n+ raises(IndexError, lambda: dmp_eval_in(f_6, ZZ(1), 4, 3, ZZ))\n+\n \n def test_dmp_eval_tail():\n assert dmp_eval_tail([[]], [1], 1, ZZ) == []\n@@ -248,6 +258,11 @@ def test_dmp_diff_eval_in():\n assert dmp_diff_eval_in(f_6, 2, 7, 1, 3, ZZ) == \\\n dmp_eval(dmp_diff(dmp_swap(f_6, 0, 1, 3, ZZ), 2, 3, ZZ), 7, 3, ZZ)\n \n+ assert dmp_diff_eval_in(f_6, 2, 7, 0, 3, ZZ) == \\\n+ dmp_eval(dmp_diff(f_6, 2, 3, ZZ), 7, 3, ZZ)\n+\n+ raises(IndexError, lambda: dmp_diff_eval_in(f_6, 1, ZZ(1), 4, 3, ZZ))\n+\n \n def test_dup_revert():\n f = [-QQ(1, 720), QQ(0), QQ(1, 24), QQ(0), -QQ(1, 2), QQ(0), QQ(1)]\n@@ -271,6 +286,12 @@ def test_dup_trunc():\n assert dup_trunc([1, 2, 3, 4, 5, 6], ZZ(3), ZZ) == [1, -1, 0, 1, -1, 
0]\n assert dup_trunc([6, 5, 4, 3, 2, 1], ZZ(3), ZZ) == [-1, 1, 0, -1, 1]\n \n+ R = ZZ_I\n+ assert dup_trunc([R(3), R(4), R(5)], R(3), R) == [R(1), R(-1)]\n+\n+ K = FF(5)\n+ assert dup_trunc([K(3), K(4), K(5)], K(3), K) == [K(1), K(0)]\n+\n \n def test_dmp_trunc():\n assert dmp_trunc([[]], [1, 2], 2, ZZ) == [[]]\n@@ -294,6 +315,8 @@ def test_dup_monic():\n \n \n def test_dmp_ground_monic():\n+ assert dmp_ground_monic([3, 6, 9], 0, ZZ) == [1, 2, 3]\n+\n assert dmp_ground_monic([[3], [6], [9]], 1, ZZ) == [[1], [2], [3]]\n \n raises(\n@@ -386,6 +409,8 @@ def test_dup_primitive():\n \n \n def test_dmp_ground_primitive():\n+ assert dmp_ground_primitive([ZZ(1)], 0, ZZ) == (ZZ(1), [ZZ(1)])\n+\n assert dmp_ground_primitive([[]], 1, ZZ) == (ZZ(0), [[]])\n \n assert dmp_ground_primitive(f_0, 2, ZZ) == (ZZ(1), f_0)\n@@ -456,9 +481,15 @@ def test_dup_real_imag():\n assert dup_real_imag(\n [1, 2, 3], ZZ) == ([[1], [2], [-1, 0, 3]], [[2, 0], [2, 0]])\n \n+ assert dup_real_imag([ZZ(1), ZZ(0), ZZ(1), ZZ(3)], ZZ) == (\n+ [[ZZ(1)], [], [ZZ(-3), ZZ(0), ZZ(1)], [ZZ(3)]],\n+ [[ZZ(3), ZZ(0)], [], [ZZ(-1), ZZ(0), ZZ(1), ZZ(0)]]\n+ )\n+\n raises(DomainError, lambda: dup_real_imag([EX(1), EX(2)], EX))\n \n \n+\n def test_dup_mirror():\n assert dup_mirror([], ZZ) == []\n assert dup_mirror([1], ZZ) == [1]\n@@ -483,6 +514,16 @@ def test_dup_shift():\n assert dup_shift([1, 2, 3, 4, 5], 7, ZZ) == [1, 30, 339, 1712, 3267]\n \n \n+def test_dmp_shift():\n+ assert dmp_shift([ZZ(1), ZZ(2)], [ZZ(1)], 0, ZZ) == [ZZ(1), ZZ(3)]\n+\n+ assert dmp_shift([[]], [ZZ(1), ZZ(2)], 1, ZZ) == [[]]\n+\n+ xy = [[ZZ(1), ZZ(0)], []] # x*y\n+ x1y2 = [[ZZ(1), ZZ(2)], [ZZ(1), ZZ(2)]] # (x+1)*(y+2)\n+ assert dmp_shift(xy, [ZZ(1), ZZ(2)], 1, ZZ) == x1y2\n+\n+\n def test_dup_transform():\n assert dup_transform([], [], [1, 1], ZZ) == []\n assert dup_transform([], [1], [1, 1], ZZ) == []\n@@ -570,12 +611,17 @@ def test_dup_decompose():\n def test_dmp_lift():\n q = [QQ(1, 1), QQ(0, 1), QQ(1, 1)]\n \n- f = [ANP([QQ(1, 1)], q, QQ), ANP([], q, QQ), ANP([], q, QQ),\n+ f_a = [ANP([QQ(1, 1)], q, QQ), ANP([], q, QQ), ANP([], q, QQ),\n ANP([QQ(1, 1), QQ(0, 1)], q, QQ), ANP([QQ(17, 1), QQ(0, 1)], q, QQ)]\n \n- assert dmp_lift(f, 0, QQ.algebraic_field(I)) == \\\n- [QQ(1), QQ(0), QQ(0), QQ(0), QQ(0), QQ(0), QQ(2), QQ(0), QQ(578),\n- QQ(0), QQ(0), QQ(0), QQ(1), QQ(0), QQ(-578), QQ(0), QQ(83521)]\n+ f_lift = [QQ(1), QQ(0), QQ(0), QQ(0), QQ(0), QQ(0), QQ(2), QQ(0), QQ(578),\n+ QQ(0), QQ(0), QQ(0), QQ(1), QQ(0), QQ(-578), QQ(0), QQ(83521)]\n+\n+ assert dmp_lift(f_a, 0, QQ.algebraic_field(I)) == f_lift\n+\n+ f_g = [QQ_I(1), QQ_I(0), QQ_I(0), QQ_I(0, 1), QQ_I(0, 17)]\n+\n+ assert dmp_lift(f_g, 0, QQ_I) == f_lift\n \n raises(DomainError, lambda: dmp_lift([EX(1), EX(2)], 0, EX))\n \ndiff --git a/sympy/polys/tests/test_factortools.py b/sympy/polys/tests/test_factortools.py\nindex 84133d4137e4..7f99097c71e9 100644\n--- a/sympy/polys/tests/test_factortools.py\n+++ b/sympy/polys/tests/test_factortools.py\n@@ -562,7 +562,10 @@ def anp(element):\n \n \n def test_dmp_ext_factor():\n- R, x,y = ring(\"x,y\", QQ.algebraic_field(sqrt(2)))\n+ K = QQ.algebraic_field(sqrt(2))\n+ R, x,y = ring(\"x,y\", K)\n+ sqrt2 = K.unit\n+\n def anp(x):\n return ANP(x, [QQ(1), QQ(0), QQ(-2)], QQ)\n \n@@ -588,6 +591,12 @@ def anp(x):\n (anp([QQ(2)]), [(anp([QQ(1)])*x + anp([QQ(-1), QQ(0)])*y, 1),\n (anp([QQ(1)])*x + anp([QQ( 1), QQ(0)])*y, 1)])\n \n+ f1 = y + 1\n+ f2 = y + sqrt2\n+ f3 = x**2 + x + 2 + 3*sqrt2\n+ f = f1**2 * f2**2 * f3**2\n+ assert R.dmp_ext_factor(f) == (K.one, [(f1, 2), (f2, 2), 
(f3, 2)])\n+\n \n def test_dup_factor_list():\n R, x = ring(\"x\", ZZ)\ndiff --git a/sympy/polys/tests/test_polytools.py b/sympy/polys/tests/test_polytools.py\nindex 2bab24433e43..0cce69d0ffb1 100644\n--- a/sympy/polys/tests/test_polytools.py\n+++ b/sympy/polys/tests/test_polytools.py\n@@ -2313,6 +2313,11 @@ def test_compose():\n def test_shift():\n assert Poly(x**2 - 2*x + 1, x).shift(2) == Poly(x**2 + 2*x + 1, x)\n \n+\n+def test_shift_list():\n+ assert Poly(x*y, [x,y]).shift_list([1,2]) == Poly((x+1)*(y+2), [x,y])\n+\n+\n def test_transform():\n # Also test that 3-way unification is done correctly\n assert Poly(x**2 - 2*x + 1, x).transform(Poly(x + 1), Poly(x - 1)) == \\\n@@ -2397,17 +2402,17 @@ def test_norm():\n \n def test_sqf_norm():\n assert sqf_norm(x**2 - 2, extension=sqrt(3)) == \\\n- (1, x**2 - 2*sqrt(3)*x + 1, x**4 - 10*x**2 + 1)\n+ ([1], x**2 - 2*sqrt(3)*x + 1, x**4 - 10*x**2 + 1)\n assert sqf_norm(x**2 - 3, extension=sqrt(2)) == \\\n- (1, x**2 - 2*sqrt(2)*x - 1, x**4 - 10*x**2 + 1)\n+ ([1], x**2 - 2*sqrt(2)*x - 1, x**4 - 10*x**2 + 1)\n \n assert Poly(x**2 - 2, extension=sqrt(3)).sqf_norm() == \\\n- (1, Poly(x**2 - 2*sqrt(3)*x + 1, x, extension=sqrt(3)),\n- Poly(x**4 - 10*x**2 + 1, x, domain='QQ'))\n+ ([1], Poly(x**2 - 2*sqrt(3)*x + 1, x, extension=sqrt(3)),\n+ Poly(x**4 - 10*x**2 + 1, x, domain='QQ'))\n \n assert Poly(x**2 - 3, extension=sqrt(2)).sqf_norm() == \\\n- (1, Poly(x**2 - 2*sqrt(2)*x - 1, x, extension=sqrt(2)),\n- Poly(x**4 - 10*x**2 + 1, x, domain='QQ'))\n+ ([1], Poly(x**2 - 2*sqrt(2)*x - 1, x, extension=sqrt(2)),\n+ Poly(x**4 - 10*x**2 + 1, x, domain='QQ'))\n \n \n def test_sqf():\n@@ -2693,6 +2698,24 @@ def test_factor():\n assert factor_list((x - sqrt(2)*pi)*(x + sqrt(2)*pi), x) == (\n 1, [(x - sqrt(2)*pi, 1), (x + sqrt(2)*pi, 1)])\n \n+ # https://github.com/sympy/sympy/issues/26497\n+ p = ((y - I)**2 * (y + I) * (x + 1))\n+ assert factor(expand(p)) == p\n+\n+ p = ((x - I)**2 * (x + I) * (y + 1))\n+ assert factor(expand(p)) == p\n+\n+ p = (y + 1)**2*(y + sqrt(2))**2*(x**2 + x + 2 + 3*sqrt(2))**2\n+ assert factor(expand(p), extension=True) == p\n+\n+ e = (\n+ -x**2*y**4/(y**2 + 1) + 2*I*x**2*y**3/(y**2 + 1) + 2*I*x**2*y/(y**2 + 1) +\n+ x**2/(y**2 + 1) - 2*x*y**4/(y**2 + 1) + 4*I*x*y**3/(y**2 + 1) +\n+ 4*I*x*y/(y**2 + 1) + 2*x/(y**2 + 1) - y**4 - y**4/(y**2 + 1) + 2*I*y**3\n+ + 2*I*y**3/(y**2 + 1) + 2*I*y + 2*I*y/(y**2 + 1) + 1 + 1/(y**2 + 1)\n+ )\n+ assert factor(e) == -(y - I)**3*(y + I)*(x**2 + 2*x + y**2 + 2)/(y**2 + 1)\n+\n \n def test_factor_large():\n f = (x**2 + 4*x + 4)**10000000*(x**2 + 1)*(x**2 + 2*x + 1)**1234567\ndiff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py\nindex 2f560922ea81..d2cc34da5a83 100644\n--- a/sympy/polys/tests/test_rings.py\n+++ b/sympy/polys/tests/test_rings.py\n@@ -1490,6 +1490,12 @@ def test_PolyElement_decompose():\n def test_PolyElement_shift():\n _, x = ring(\"x\", ZZ)\n assert (x**2 - 2*x + 1).shift(2) == x**2 + 2*x + 1\n+ assert (x**2 - 2*x + 1).shift_list([2]) == x**2 + 2*x + 1\n+\n+ R, x, y = ring(\"x, y\", ZZ)\n+ assert (x*y).shift_list([1, 2]) == (x+1)*(y+2)\n+\n+ raises(MultivariatePolynomialError, lambda: (x*y).shift(1))\n \n def test_PolyElement_sturm():\n F, t = field(\"t\", ZZ)\n@@ -1517,12 +1523,12 @@ def test_PolyElement_sqf_norm():\n R, x = ring(\"x\", QQ.algebraic_field(sqrt(3)))\n X = R.to_ground().x\n \n- assert (x**2 - 2).sqf_norm() == (1, x**2 - 2*sqrt(3)*x + 1, X**4 - 10*X**2 + 1)\n+ assert (x**2 - 2).sqf_norm() == ([1], x**2 - 2*sqrt(3)*x + 1, X**4 - 10*X**2 + 1)\n \n R, x = 
ring(\"x\", QQ.algebraic_field(sqrt(2)))\n X = R.to_ground().x\n \n- assert (x**2 - 3).sqf_norm() == (1, x**2 - 2*sqrt(2)*x - 1, X**4 - 10*X**2 + 1)\n+ assert (x**2 - 3).sqf_norm() == ([1], x**2 - 2*sqrt(2)*x - 1, X**4 - 10*X**2 + 1)\n \n def test_PolyElement_sqf_list():\n _, x = ring(\"x\", ZZ)\ndiff --git a/sympy/polys/tests/test_sqfreetools.py b/sympy/polys/tests/test_sqfreetools.py\nindex 976b035163ed..b772a05a50e2 100644\n--- a/sympy/polys/tests/test_sqfreetools.py\n+++ b/sympy/polys/tests/test_sqfreetools.py\n@@ -134,6 +134,10 @@ def test_dmp_sqf():\n f = -x**2 + 2*x - 1\n assert R.dmp_sqf_list_include(f) == [(-1, 1), (x - 1, 2)]\n \n+ f = (y**2 + 1)**2*(x**2 + 2*x + 2)\n+ assert R.dmp_sqf_p(f) is False\n+ assert R.dmp_sqf_list(f) == (1, [(x**2 + 2*x + 2, 1), (y**2 + 1, 2)])\n+\n R, x, y = ring(\"x,y\", FF(2))\n raises(NotImplementedError, lambda: R.dmp_sqf_list(y**2 + 1))\n \n" }
[ { "diff_hunk": "@@ -118,59 +184,203 @@ def dup_sqf_norm(f, K):\n return s, f, r\n \n \n+def _dmp_sqf_norm_shifts(f, u, K):\n+ \"\"\"Generate a sequence of candidate shifts for dmp_sqf_norm.\"\"\"\n+ #\n+ # We want to find a minimal shift if possible because shifting high degree\n+ # variables can be expensive e.g. x**10 -> (x + 1)**10. We try a few easy\n+ # cases first before the final infinite loop that is guaranteed to give\n+ # only finitely many bad shifts (see Trager76 for proof of this in the\n+ # univariate case).\n+ #\n+\n+ # First the trivial shift [0, 0, ...]\n+ n = u + 1\n+ s0 = [0] * n\n+ yield s0, f\n+\n+ # Shift in multiples of the generator of the extension field K\n+ a = K.unit\n+\n+ # Variables of degree > 0 ordered by increasing degree\n+ d = dmp_degree_list(f, u)\n+ var_indices = [i for di, i in sorted(zip(d, range(u+1))) if di > 0]\n+\n+ # Now try [1, 0, 0, ...], [0, 1, 0, ...]\n+ for i in var_indices:\n+ s1 = s0.copy()\n+ s1[i] = 1\n+ a1 = [-a*s1i for s1i in s1]\n+ f1 = dmp_shift(f, a1, u, K)\n+ yield s1, f1\n+\n+ # Now try [1, 1, 1, ...], [2, 2, 2, ...]\n+ j = 0\n+ while True:\n+ j += 1\n+ sj = [j] * n\n+ aj = [-a*j] * n\n+ fj = dmp_shift(f, aj, u, K)\n+ yield sj, fj\n+\n+\n def dmp_sqf_norm(f, u, K):\n- \"\"\"\n- Square-free norm of ``f`` in ``K[X]``, useful over algebraic domains.\n+ r\"\"\"\n+ Find a shift of ``f`` in ``K[X]`` that has square-free norm.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n \n- Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))``\n- is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``.\n+ Returns `(s,g,r)`, such that `g(x_1,x_2,\\cdots)=f(x_1-s_1 a, x_2 - s_2 a,\n+ \\cdots)`, `r(x)=\\text{Norm}(g(x))` and `r` is a square-free polynomial over\n+ `k`.\n \n Examples\n ========\n \n+ We first create the algebraic number field `K=k(a)=\\mathbb{Q}(i)` and rings\n+ `K[x]` and `k[x]`:", "line": null, "original_line": 241, "original_start_line": null, "path": "sympy/polys/sqfreetools.py", "start_line": null, "text": "@user1:\n``K[x, y]`` and ``k[x, y]``\n\n@author:\nFixed." }, { "diff_hunk": "@@ -118,59 +184,203 @@ def dup_sqf_norm(f, K):\n return s, f, r\n \n \n+def _dmp_sqf_norm_shifts(f, u, K):\n+ \"\"\"Generate a sequence of candidate shifts for dmp_sqf_norm.\"\"\"\n+ #\n+ # We want to find a minimal shift if possible because shifting high degree\n+ # variables can be expensive e.g. x**10 -> (x + 1)**10. 
We try a few easy\n+ # cases first before the final infinite loop that is guaranteed to give\n+ # only finitely many bad shifts (see Trager76 for proof of this in the\n+ # univariate case).\n+ #\n+\n+ # First the trivial shift [0, 0, ...]\n+ n = u + 1\n+ s0 = [0] * n\n+ yield s0, f\n+\n+ # Shift in multiples of the generator of the extension field K\n+ a = K.unit\n+\n+ # Variables of degree > 0 ordered by increasing degree\n+ d = dmp_degree_list(f, u)\n+ var_indices = [i for di, i in sorted(zip(d, range(u+1))) if di > 0]\n+\n+ # Now try [1, 0, 0, ...], [0, 1, 0, ...]\n+ for i in var_indices:\n+ s1 = s0.copy()\n+ s1[i] = 1\n+ a1 = [-a*s1i for s1i in s1]\n+ f1 = dmp_shift(f, a1, u, K)\n+ yield s1, f1\n+\n+ # Now try [1, 1, 1, ...], [2, 2, 2, ...]\n+ j = 0\n+ while True:\n+ j += 1\n+ sj = [j] * n\n+ aj = [-a*j] * n\n+ fj = dmp_shift(f, aj, u, K)\n+ yield sj, fj\n+\n+\n def dmp_sqf_norm(f, u, K):\n- \"\"\"\n- Square-free norm of ``f`` in ``K[X]``, useful over algebraic domains.\n+ r\"\"\"\n+ Find a shift of ``f`` in ``K[X]`` that has square-free norm.\n+\n+ The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`).\n \n- Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))``\n- is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``.\n+ Returns `(s,g,r)`, such that `g(x_1,x_2,\\cdots)=f(x_1-s_1 a, x_2 - s_2 a,\n+ \\cdots)`, `r(x)=\\text{Norm}(g(x))` and `r` is a square-free polynomial over\n+ `k`.\n \n Examples\n ========\n \n+ We first create the algebraic number field `K=k(a)=\\mathbb{Q}(i)` and rings\n+ `K[x]` and `k[x]`:\n+\n >>> from sympy.polys import ring, QQ\n >>> from sympy import I\n \n >>> K = QQ.algebraic_field(I)\n >>> R, x, y = ring(\"x,y\", K)\n >>> _, X, Y = ring(\"x,y\", QQ)\n \n- >>> s, f, r = R.dmp_sqf_norm(x*y + y**2)\n+ We can now find a square free norm for a shift of `f`:\n \n- >>> s == 1\n+ >>> f = x*y + y**2\n+ >>> s, g, r = R.dmp_sqf_norm(f)\n+\n+ The choice of shifts ``s`` is arbitrary and the particular values returned\n+ for ``g`` and ``r`` are determined by ``s``.\n+\n+ >>> s\n+ [0, 1]\n+ >>> g == x*y - I*x + y**2 - 2*I*y - 1\n+ True\n+ >>> r == X**2*Y**2 + X**2 + 2*X*Y**3 + 2*X*Y + Y**4 + 2*Y**2 + 1\n True\n- >>> f == x*y + y**2 + K([QQ(-1), QQ(0)])*y\n+\n+ The required invariants are:\n+\n+ >>> g == f.shift_list([-si*K.unit for si in s])\n True\n- >>> r == X**2*Y**2 + 2*X*Y**3 + Y**4 + Y**2\n+ >>> g.norm() == r\n True\n+ >>> r.is_squarefree\n+ True\n+\n+ Explanation\n+ ===========\n+\n+ This is part of Trager's algorithm for factorizing polynomials over\n+ algebraic number fields. In particular this function is a multivariate\n+ generalization of algorithm ``sqfr_norm`` from [Trager76]_.\n+\n+ See Also\n+ ========\n \n+ dup_sqf_norm:\n+ Analogous function for multivariate polynomials over ``k(a)``.", "line": null, "original_line": 285, "original_start_line": null, "path": "sympy/polys/sqfreetools.py", "start_line": null, "text": "@user1:\n\"multivariate\" --> \"univariate\"\n\n@author:\nFixed." } ]
82c1557b6a7ea386e0182020b6930e4bfeb64763
diff --git a/doc/src/modules/polys/internals.rst b/doc/src/modules/polys/internals.rst index 31c8659be326..25d24f6ac28c 100644 --- a/doc/src/modules/polys/internals.rst +++ b/doc/src/modules/polys/internals.rst @@ -545,7 +545,9 @@ Polynomial factorization in characteristic zero: .. currentmodule:: sympy.polys.factortools +.. autofunction:: dup_trial_division .. autofunction:: dmp_trial_division +.. autofunction:: dup_zz_mignotte_bound .. autofunction:: dmp_zz_mignotte_bound .. autofunction:: dup_zz_hensel_step .. autofunction:: dup_zz_hensel_lift @@ -559,16 +561,49 @@ Polynomial factorization in characteristic zero: .. autofunction:: dmp_zz_wang_non_divisors .. autofunction:: dmp_zz_wang_test_points .. autofunction:: dmp_zz_wang_lead_coeffs +.. autofunction:: dup_zz_diophantine .. autofunction:: dmp_zz_diophantine .. autofunction:: dmp_zz_wang_hensel_lifting .. autofunction:: dmp_zz_wang .. autofunction:: dmp_zz_factor +.. autofunction:: dup_qq_i_factor +.. autofunction:: dup_zz_i_factor +.. autofunction:: dmp_qq_i_factor +.. autofunction:: dmp_zz_i_factor +.. autofunction:: dup_ext_factor .. autofunction:: dmp_ext_factor .. autofunction:: dup_gf_factor +.. autofunction:: dmp_gf_factor +.. autofunction:: dup_factor_list +.. autofunction:: dup_factor_list_include .. autofunction:: dmp_factor_list .. autofunction:: dmp_factor_list_include +.. autofunction:: dup_irreducible_p .. autofunction:: dmp_irreducible_p +Square-free factorization: + +.. currentmodule:: sympy.polys.sqfreetools + +.. autofunction:: dup_sqf_p +.. autofunction:: dmp_sqf_p +.. autofunction:: dup_sqf_norm +.. autofunction:: dmp_sqf_norm +.. autofunction:: dmp_norm +.. autofunction:: dup_gf_sqf_part +.. autofunction:: dmp_gf_sqf_part +.. autofunction:: dup_sqf_part +.. autofunction:: dmp_sqf_part +.. autofunction:: dup_gf_sqf_list +.. autofunction:: dmp_gf_sqf_list +.. autofunction:: dup_sqf_list +.. autofunction:: dup_sqf_list_include +.. autofunction:: dmp_sqf_list +.. autofunction:: dmp_sqf_list_include +.. autofunction:: dup_gff_list +.. autofunction:: dmp_gff_list + + Groebner basis algorithms ************************* diff --git a/doc/src/modules/polys/literature.rst b/doc/src/modules/polys/literature.rst index 47fa9453baa9..5cfc3336ed68 100644 --- a/doc/src/modules/polys/literature.rst +++ b/doc/src/modules/polys/literature.rst @@ -144,4 +144,16 @@ a theoretical foundation for implementing polynomials manipulation module. https://isc.tamu.edu/resources/preprints/1996/1996-02.pdf .. [Cohen93] Henri Cohen. "A Course in Computational Algebraic Number Theory", - Springer, 1993. + Springer, 1993. + +.. [Trager76] Barry M. Trager. "Algebraic factoriing and rational function + integration", Proceedings of SYMSAC 1976, pp. 219-226, ACM, 1976. + https://dl.acm.org/doi/abs/10.1145/800205.806338 + +.. [Yun76] David Y.Y. Yun. "On square-free decomposition algorithms", + Proceedings of SYMSAC 1976, pp. 219-226, ACM, 1976. + https://dl.acm.org/doi/10.1145/800205.806320 + +.. [Abbott13] John Abbott. "Bounds on factors in Z[x]". + Journal of Symbolic Computation 50 (2013), pp. 
532-563 + https://doi.org/10.1016/j.jsc.2012.09.004 diff --git a/sympy/polys/compatibility.py b/sympy/polys/compatibility.py index 6635c61e569c..ee71d9efedae 100644 --- a/sympy/polys/compatibility.py +++ b/sympy/polys/compatibility.py @@ -101,6 +101,7 @@ from sympy.polys.densetools import dup_mirror from sympy.polys.densetools import dup_scale from sympy.polys.densetools import dup_shift +from sympy.polys.densetools import dmp_shift from sympy.polys.densetools import dup_transform from sympy.polys.densetools import dup_compose from sympy.polys.densetools import dmp_compose @@ -209,9 +210,10 @@ from sympy.polys.rootisolation import dup_isolate_all_roots from sympy.polys.sqfreetools import ( - dup_sqf_p, dmp_sqf_p, dup_sqf_norm, dmp_sqf_norm, dup_gf_sqf_part, dmp_gf_sqf_part, - dup_sqf_part, dmp_sqf_part, dup_gf_sqf_list, dmp_gf_sqf_list, dup_sqf_list, - dup_sqf_list_include, dmp_sqf_list, dmp_sqf_list_include, dup_gff_list, dmp_gff_list) + dup_sqf_p, dmp_sqf_p, dmp_norm, dup_sqf_norm, dmp_sqf_norm, + dup_gf_sqf_part, dmp_gf_sqf_part, dup_sqf_part, dmp_sqf_part, + dup_gf_sqf_list, dmp_gf_sqf_list, dup_sqf_list, dup_sqf_list_include, + dmp_sqf_list, dmp_sqf_list_include, dup_gff_list, dmp_gff_list) from sympy.polys.galoistools import ( gf_degree, gf_LC, gf_TC, gf_strip, gf_from_dict, @@ -515,6 +517,8 @@ def dup_scale(self, f, a): return self.from_dense(dup_scale(self.to_dense(f), a, self.domain)) def dup_shift(self, f, a): return self.from_dense(dup_shift(self.to_dense(f), a, self.domain)) + def dmp_shift(self, f, a): + return self.from_dense(dmp_shift(self.to_dense(f), a, self.ngens-1, self.domain)) def dup_transform(self, f, p, q): return self.from_dense(dup_transform(self.to_dense(f), self.to_dense(p), self.to_dense(q), self.domain)) @@ -877,6 +881,10 @@ def dup_sqf_p(self, f): def dmp_sqf_p(self, f): return dmp_sqf_p(self.to_dense(f), self.ngens-1, self.domain) + def dmp_norm(self, f): + n = dmp_norm(self.to_dense(f), self.ngens-1, self.domain) + return self.to_ground().from_dense(n) + def dup_sqf_norm(self, f): s, F, R = dup_sqf_norm(self.to_dense(f), self.domain) return (s, self.from_dense(F), self.to_ground().from_dense(R)) diff --git a/sympy/polys/densetools.py b/sympy/polys/densetools.py index 30760c6559c6..0c9db99e9e68 100644 --- a/sympy/polys/densetools.py +++ b/sympy/polys/densetools.py @@ -782,7 +782,7 @@ def dmp_ground_extract(f, g, u, K): def dup_real_imag(f, K): """ - Return bivariate polynomials ``f1`` and ``f2``, such that ``f = f1 + f2*I``. + Find ``f1`` and ``f2``, such that ``f(x+I*y) = f1(x,y) + f2(x,y)*I``. Examples ======== @@ -793,6 +793,11 @@ def dup_real_imag(f, K): >>> R.dup_real_imag(x**3 + x**2 + x + 1) (x**3 + x**2 - 3*x*y**2 + x - y**2 + 1, 3*x**2*y + 2*x*y - y**3 + y) + >>> from sympy.abc import x, y, z + >>> from sympy import I + >>> (z**3 + z**2 + z + 1).subs(z, x+I*y).expand().collect(I) + x**3 + x**2 - 3*x*y**2 + x - y**2 + I*(3*x**2*y + 2*x*y - y**3 + y) + 1 + """ if not K.is_ZZ and not K.is_QQ: raise DomainError("computing real and imaginary parts is not supported over %s" % K) @@ -894,6 +899,44 @@ def dup_shift(f, a, K): return f +def dmp_shift(f, a, u, K): + """ + Evaluate efficiently Taylor shift ``f(X + A)`` in ``K[X]``. 
+ + Examples + ======== + + >>> from sympy import symbols, ring, ZZ + >>> x, y = symbols('x y') + >>> R, _, _ = ring([x, y], ZZ) + + >>> p = x**2*y + 2*x*y + 3*x + 4*y + 5 + + >>> R.dmp_shift(R(p), [ZZ(1), ZZ(2)]) + x**2*y + 2*x**2 + 4*x*y + 11*x + 7*y + 22 + + >>> p.subs({x: x + 1, y: y + 2}).expand() + x**2*y + 2*x**2 + 4*x*y + 11*x + 7*y + 22 + """ + if not u: + return dup_shift(f, a[0], K) + + if dmp_zero_p(f, u): + return f + + a0, a1 = a[0], a[1:] + + f = [ dmp_shift(c, a1, u-1, K) for c in f ] + n = len(f) - 1 + + for i in range(n, 0, -1): + for j in range(0, i): + afj = dmp_mul_ground(f[j], a0, u-1, K) + f[j + 1] = dmp_add(f[j + 1], afj, u-1, K) + + return f + + def dup_transform(f, p, q, K): """ Evaluate functional transformation ``q**n * f(p/q)`` in ``K[x]``. diff --git a/sympy/polys/factortools.py b/sympy/polys/factortools.py index c0316375ba1a..021a6b06cb88 100644 --- a/sympy/polys/factortools.py +++ b/sympy/polys/factortools.py @@ -55,8 +55,7 @@ dup_primitive, dmp_ground_primitive, dmp_eval_tail, dmp_eval_in, dmp_diff_eval_in, - dmp_compose, - dup_shift, dup_mirror) + dup_shift, dmp_shift, dup_mirror) from sympy.polys.euclidtools import ( dmp_primitive, @@ -65,7 +64,9 @@ from sympy.polys.sqfreetools import ( dup_sqf_p, dup_sqf_norm, dmp_sqf_norm, - dup_sqf_part, dmp_sqf_part) + dup_sqf_part, dmp_sqf_part, + _dup_check_degrees, _dmp_check_degrees, + ) from sympy.polys.polyutils import _sort_factors from sympy.polys.polyconfig import query @@ -88,6 +89,8 @@ def dup_trial_division(f, factors, K): """ Determine multiplicities of factors for a univariate polynomial using trial division. + + An error will be raised if any factor does not divide ``f``. """ result = [] @@ -102,6 +105,9 @@ def dup_trial_division(f, factors, K): else: break + if k == 0: + raise RuntimeError("trial division failed") + result.append((factor, k)) return _sort_factors(result) @@ -111,6 +117,8 @@ def dmp_trial_division(f, factors, u, K): """ Determine multiplicities of factors for a multivariate polynomial using trial division. + + An error will be raised if any factor does not divide ``f``. """ result = [] @@ -125,6 +133,9 @@ def dmp_trial_division(f, factors, u, K): else: break + if k == 0: + raise RuntimeError("trial division failed") + result.append((factor, k)) return _sort_factors(result) @@ -133,7 +144,7 @@ def dmp_trial_division(f, factors, u, K): def dup_zz_mignotte_bound(f, K): """ The Knuth-Cohen variant of Mignotte bound for - univariate polynomials in `K[x]`. + univariate polynomials in ``K[x]``. Examples ======== @@ -145,17 +156,18 @@ def dup_zz_mignotte_bound(f, K): >>> R.dup_zz_mignotte_bound(f) 152 - By checking `factor(f)` we can see that max coeff is 8 + By checking ``factor(f)`` we can see that max coeff is 8 - Also consider a case that `f` is irreducible for example `f = 2*x**2 + 3*x + 4` - To avoid a bug for these cases, we return the bound plus the max coefficient of `f` + Also consider a case that ``f`` is irreducible for example + ``f = 2*x**2 + 3*x + 4``. 
To avoid a bug for these cases, we return the + bound plus the max coefficient of ``f`` >>> f = 2*x**2 + 3*x + 4 >>> R.dup_zz_mignotte_bound(f) 6 - Lastly,To see the difference between the new and the old Mignotte bound - consider the irreducible polynomial:: + Lastly, to see the difference between the new and the old Mignotte bound + consider the irreducible polynomial: >>> f = 87*x**7 + 4*x**6 + 80*x**5 + 17*x**4 + 9*x**3 + 12*x**2 + 49*x + 26 >>> R.dup_zz_mignotte_bound(f) @@ -167,7 +179,7 @@ def dup_zz_mignotte_bound(f, K): References ========== - ..[1] [Abbott2013]_ + ..[1] [Abbott13]_ """ from sympy.functions.combinatorial.factorials import binomial @@ -702,6 +714,9 @@ def dup_zz_factor(f, K): H = dup_zz_zassenhaus(g, K) factors = dup_trial_division(f, H, K) + + _dup_check_degrees(f, factors) + return cont, factors @@ -1177,6 +1192,8 @@ def dmp_zz_factor(f, u, K): for g, k in dmp_zz_factor(G, u - 1, K)[1]: factors.insert(0, ([g], k)) + _dmp_check_degrees(f, u, factors) + return cont, _sort_factors(factors) @@ -1247,7 +1264,71 @@ def dmp_zz_i_factor(f, u, K0): def dup_ext_factor(f, K): - """Factor univariate polynomials over algebraic number fields. """ + r"""Factor univariate polynomials over algebraic number fields. + + The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`). + + Examples + ======== + + First define the algebraic number field `K = \mathbb{Q}(\sqrt{2})`: + + >>> from sympy import QQ, sqrt + >>> from sympy.polys.factortools import dup_ext_factor + >>> K = QQ.algebraic_field(sqrt(2)) + + We can now factorise the polynomial `x^2 - 2` over `K`: + + >>> p = [K(1), K(0), K(-2)] # x^2 - 2 + >>> p1 = [K(1), -K.unit] # x - sqrt(2) + >>> p2 = [K(1), +K.unit] # x + sqrt(2) + >>> dup_ext_factor(p, K) == (K.one, [(p1, 1), (p2, 1)]) + True + + Usually this would be done at a higher level: + + >>> from sympy import factor + >>> from sympy.abc import x + >>> factor(x**2 - 2, extension=sqrt(2)) + (x - sqrt(2))*(x + sqrt(2)) + + Explanation + =========== + + Uses Trager's algorithm. In particular this function is algorithm + ``alg_factor`` from [Trager76]_. + + If `f` is a polynomial in `k(a)[x]` then its norm `g(x)` is a polynomial in + `k[x]`. If `g(x)` is square-free and has irreducible factors `g_1(x)`, + `g_2(x)`, `\cdots` then the irreducible factors of `f` in `k(a)[x]` are + given by `f_i(x) = \gcd(f(x), g_i(x))` where the GCD is computed in + `k(a)[x]`. + + The first step in Trager's algorithm is to find an integer shift `s` so + that `f(x-sa)` has square-free norm. Then the norm is factorized in `k[x]` + and the GCD of (shifted) `f` with each factor gives the shifted factors of + `f`. At the end the shift is undone to recover the unshifted factors of `f` + in `k(a)[x]`. + + The algorithm reduces the problem of factorization in `k(a)[x]` to + factorization in `k[x]` with the main additional steps being to compute the + norm (a resultant calculation in `k[x,y]`) and some polynomial GCDs in + `k(a)[x]`. + + In practice in SymPy the base field `k` will be the rationals :ref:`QQ` and + this function factorizes a polynomial with coefficients in an algebraic + number field like `\mathbb{Q}(\sqrt{2})`. + + See Also + ======== + + dmp_ext_factor: + Analogous function for multivariate polynomials over ``k(a)``. + dup_sqf_norm: + Subroutine ``sqfr_norm`` also from [Trager76]_. + sympy.polys.polytools.factor: + The high-level function that ultimately uses this function as needed. 
+ """ n, lc = dup_degree(f), dup_LC(f, K) f = dup_monic(f, K) @@ -1274,11 +1355,59 @@ def dup_ext_factor(f, K): factors[i] = h factors = dup_trial_division(F, factors, K) + + _dup_check_degrees(F, factors) + return lc, factors def dmp_ext_factor(f, u, K): - """Factor multivariate polynomials over algebraic number fields. """ + r"""Factor multivariate polynomials over algebraic number fields. + + The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`). + + Examples + ======== + + First define the algebraic number field `K = \mathbb{Q}(\sqrt{2})`: + + >>> from sympy import QQ, sqrt + >>> from sympy.polys.factortools import dmp_ext_factor + >>> K = QQ.algebraic_field(sqrt(2)) + + We can now factorise the polynomial `x^2 y^2 - 2` over `K`: + + >>> p = [[K(1),K(0),K(0)], [], [K(-2)]] # x**2*y**2 - 2 + >>> p1 = [[K(1),K(0)], [-K.unit]] # x*y - sqrt(2) + >>> p2 = [[K(1),K(0)], [+K.unit]] # x*y + sqrt(2) + >>> dmp_ext_factor(p, 1, K) == (K.one, [(p1, 1), (p2, 1)]) + True + + Usually this would be done at a higher level: + + >>> from sympy import factor + >>> from sympy.abc import x, y + >>> factor(x**2*y**2 - 2, extension=sqrt(2)) + (x*y - sqrt(2))*(x*y + sqrt(2)) + + Explanation + =========== + + This is Trager's algorithm for multivariate polynomials. In particular this + function is algorithm ``alg_factor`` from [Trager76]_. + + See :func:`dup_ext_factor` for explanation. + + See Also + ======== + + dup_ext_factor: + Analogous function for univariate polynomials over ``k(a)``. + dmp_sqf_norm: + Multivariate version of subroutine ``sqfr_norm`` also from [Trager76]_. + sympy.polys.polytools.factor: + The high-level function that ultimately uses this function as needed. + """ if not u: return dup_ext_factor(f, K) @@ -1296,15 +1425,18 @@ def dmp_ext_factor(f, u, K): if len(factors) == 1: factors = [f] else: - H = dmp_raise([K.one, s*K.unit], u, 0, K) - for i, (factor, _) in enumerate(factors): h = dmp_convert(factor, u, K.dom, K) h, _, g = dmp_inner_gcd(h, g, u, K) - h = dmp_compose(h, H, u, K) + a = [si*K.unit for si in s] + h = dmp_shift(h, a, u, K) factors[i] = h - return lc, dmp_trial_division(F, factors, u, K) + result = dmp_trial_division(F, factors, u, K) + + _dmp_check_degrees(F, u, result) + + return lc, result def dup_gf_factor(f, K): diff --git a/sympy/polys/numberfields/subfield.py b/sympy/polys/numberfields/subfield.py index 4c6a1e147eb5..b959ddeb27a6 100644 --- a/sympy/polys/numberfields/subfield.py +++ b/sympy/polys/numberfields/subfield.py @@ -356,7 +356,7 @@ def primitive_element(extension, x=None, *, ex=False, polys=False): continue _, factors = factor_list(g, extension=ext) g = _choose_factor(factors, x, gen) - s, _, g = g.sqf_norm() + [s], _, g = g.sqf_norm() gen += s*ext coeffs.append(s) @@ -378,7 +378,7 @@ def primitive_element(extension, x=None, *, ex=False, polys=False): L = QQ.algebraic_field((p, ext)) _, factors = factor_list(f, domain=L) f = _choose_factor(factors, x, gen) - s, g, f = f.sqf_norm() + [s], g, f = f.sqf_norm() gen += s*ext coeffs.append(s) K = QQ.algebraic_field((f, gen)) diff --git a/sympy/polys/polyclasses.py b/sympy/polys/polyclasses.py index 058f32681e22..c19f72a227f8 100644 --- a/sympy/polys/polyclasses.py +++ b/sympy/polys/polyclasses.py @@ -81,6 +81,7 @@ dmp_compose, dup_decompose, dup_shift, + dmp_shift, dup_transform, dmp_lift) @@ -897,6 +898,11 @@ def shift(f, a): return f._shift(f.dom.convert(a)) + def shift_list(f, a): + """Efficiently compute Taylor shift ``f(X + A)``. 
""" + a = [f.dom.convert(ai) for ai in a] + return f._shift_list(a) + def _shift(f, a): raise NotImplementedError @@ -1574,6 +1580,10 @@ def _shift(f, a): """Efficiently compute Taylor shift ``f(x + a)``. """ return f.per(dup_shift(f._rep, a, f.dom)) + def _shift_list(f, a): + """Efficiently compute Taylor shift ``f(X + A)``. """ + return f.per(dmp_shift(f._rep, a, f.lev, f.dom)) + def _transform(f, p, q): """Evaluate functional transformation ``q**n * f(p/q)``.""" return f.per(dup_transform(f._rep, p._rep, q._rep, f.dom)) diff --git a/sympy/polys/polytools.py b/sympy/polys/polytools.py index b17310651072..80f317d6798a 100644 --- a/sympy/polys/polytools.py +++ b/sympy/polys/polytools.py @@ -3102,13 +3102,32 @@ def shift(f, a): >>> Poly(x**2 - 2*x + 1, x).shift(2) Poly(x**2 + 2*x + 1, x, domain='ZZ') + See Also + ======== + + shift_list: Analogous method for multivariate polynomials. """ - if hasattr(f.rep, 'shift'): - result = f.rep.shift(a) - else: # pragma: no cover - raise OperationNotSupported(f, 'shift') + return f.per(f.rep.shift(a)) - return f.per(result) + def shift_list(f, a): + """ + Efficiently compute Taylor shift ``f(X + A)``. + + Examples + ======== + + >>> from sympy import Poly + >>> from sympy.abc import x, y + + >>> Poly(x*y, [x,y]).shift_list([1, 2]) == Poly((x+1)*(y+2), [x,y]) + True + + See Also + ======== + + shift: Analogous method for univariate polynomials. + """ + return f.per(f.rep.shift_list(a)) def transform(f, p, q): """ @@ -3240,7 +3259,7 @@ def sqf_norm(f): >>> s, f, r = Poly(x**2 + 1, x, extension=[sqrt(3)]).sqf_norm() >>> s - 1 + [1] >>> f Poly(x**2 - 2*sqrt(3)*x + 4, x, domain='QQ<sqrt(3)>') >>> r @@ -6042,7 +6061,7 @@ def sqf_norm(f, *gens, **args): >>> from sympy.abc import x >>> sqf_norm(x**2 + 1, extension=[sqrt(3)]) - (1, x**2 - 2*sqrt(3)*x + 4, x**4 - 4*x**2 + 16) + ([1], x**2 - 2*sqrt(3)*x + 4, x**4 - 4*x**2 + 16) """ options.allowed_flags(args, ['polys']) @@ -6054,10 +6073,12 @@ def sqf_norm(f, *gens, **args): s, g, r = F.sqf_norm() + s_expr = [Integer(si) for si in s] + if not opt.polys: - return Integer(s), g.as_expr(), r.as_expr() + return s_expr, g.as_expr(), r.as_expr() else: - return Integer(s), g, r + return s_expr, g, r @public @@ -6554,11 +6575,11 @@ def _try_factor(expr): try: return _generic_factor(f, gens, args, method='factor') - except PolynomialError as msg: + except PolynomialError: if not f.is_commutative: return factor_nc(f) else: - raise PolynomialError(msg) + raise @public diff --git a/sympy/polys/rings.py b/sympy/polys/rings.py index 57b0e92e6946..7e3db5d3997b 100644 --- a/sympy/polys/rings.py +++ b/sympy/polys/rings.py @@ -2983,7 +2983,10 @@ def shift(f, a): if f.ring.is_univariate: return f.ring.dup_shift(f, a) else: - raise MultivariatePolynomialError("polynomial shift") + raise MultivariatePolynomialError("shift: use shift_list instead") + + def shift_list(f, a): + return f.ring.dmp_shift(f, a) def sturm(f): if f.ring.is_univariate: @@ -2994,6 +2997,9 @@ def sturm(f): def gff_list(f): return f.ring.dmp_gff_list(f) + def norm(f): + return f.ring.dmp_norm(f) + def sqf_norm(f): return f.ring.dmp_sqf_norm(f) diff --git a/sympy/polys/sqfreetools.py b/sympy/polys/sqfreetools.py index a27b6f573aa9..1e773e9a1b6b 100644 --- a/sympy/polys/sqfreetools.py +++ b/sympy/polys/sqfreetools.py @@ -12,12 +12,12 @@ dup_LC, dmp_ground_LC, dmp_zero_p, dmp_ground, - dup_degree, dmp_degree, + dup_degree, dmp_degree, dmp_degree_in, dmp_degree_list, dmp_raise, dmp_inject, dup_convert) from sympy.polys.densetools import ( dup_diff, dmp_diff, 
dmp_diff_in, - dup_shift, dmp_compose, + dup_shift, dmp_shift, dup_monic, dmp_ground_monic, dup_primitive, dmp_ground_primitive) from sympy.polys.euclidtools import ( @@ -30,6 +30,22 @@ MultivariatePolynomialError, DomainError) + +def _dup_check_degrees(f, result): + """Sanity check the degrees of a computed factorization in K[x].""" + deg = sum(k * dup_degree(fac) for (fac, k) in result) + assert deg == dup_degree(f) + + +def _dmp_check_degrees(f, u, result): + """Sanity check the degrees of a computed factorization in K[X].""" + degs = [0] * (u + 1) + for fac, k in result: + degs_fac = dmp_degree_list(fac, u) + degs = [d1 + k * d2 for d1, d2 in zip(degs, degs_fac)] + assert tuple(degs) == dmp_degree_list(f, u) + + def dup_sqf_p(f, K): """ Return ``True`` if ``f`` is a square-free polynomial in ``K[x]``. @@ -70,20 +86,37 @@ def dmp_sqf_p(f, u, K): """ if dmp_zero_p(f, u): return True - else: - return not dmp_degree(dmp_gcd(f, dmp_diff(f, 1, u, K), u, K), u) + + for i in range(u+1): + + fp = dmp_diff_in(f, 1, i, u, K) + + if dmp_zero_p(fp, u): + continue + + gcd = dmp_gcd(f, fp, u, K) + + if dmp_degree_in(gcd, i, u) != 0: + return False + + return True def dup_sqf_norm(f, K): - """ - Square-free norm of ``f`` in ``K[x]``, useful over algebraic domains. + r""" + Find a shift of `f` in `K[x]` that has square-free norm. + + The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`). - Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))`` - is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``. + Returns `(s,g,r)`, such that `g(x)=f(x-sa)`, `r(x)=\text{Norm}(g(x))` and + `r` is a square-free polynomial over `k`. Examples ======== + We first create the algebraic number field `K=k(a)=\mathbb{Q}(\sqrt{3})` + and rings `K[x]` and `k[x]`: + >>> from sympy.polys import ring, QQ >>> from sympy import sqrt @@ -91,15 +124,48 @@ def dup_sqf_norm(f, K): >>> R, x = ring("x", K) >>> _, X = ring("x", QQ) - >>> s, f, r = R.dup_sqf_norm(x**2 - 2) + We can now find a square free norm for a shift of `f`: + + >>> f = x**2 - 1 + >>> s, g, r = R.dup_sqf_norm(f) + + The choice of shift `s` is arbitrary and the particular values returned for + `g` and `r` are determined by `s`. >>> s == 1 True - >>> f == x**2 + K([QQ(-2), QQ(0)])*x + 1 + >>> g == x**2 - 2*sqrt(3)*x + 2 True - >>> r == X**4 - 10*X**2 + 1 + >>> r == X**4 - 8*X**2 + 4 True + The invariants are: + + >>> g == f.shift(-s*K.unit) + True + >>> g.norm() == r + True + >>> r.is_squarefree + True + + Explanation + =========== + + This is part of Trager's algorithm for factorizing polynomials over + algebraic number fields. In particular this function is algorithm + ``sqfr_norm`` from [Trager76]_. + + See Also + ======== + + dmp_sqf_norm: + Analogous function for multivariate polynomials over ``k(a)``. + dmp_norm: + Computes the norm of `f` directly without any shift. + dup_ext_factor: + Function implementing Trager's algorithm that uses this. + sympy.polys.polytools.sqf_norm: + High-level interface for using this function. """ if not K.is_Algebraic: raise DomainError("ground domain must be algebraic") @@ -118,16 +184,62 @@ def dup_sqf_norm(f, K): return s, f, r +def _dmp_sqf_norm_shifts(f, u, K): + """Generate a sequence of candidate shifts for dmp_sqf_norm.""" + # + # We want to find a minimal shift if possible because shifting high degree + # variables can be expensive e.g. x**10 -> (x + 1)**10. 
We try a few easy + # cases first before the final infinite loop that is guaranteed to give + # only finitely many bad shifts (see Trager76 for proof of this in the + # univariate case). + # + + # First the trivial shift [0, 0, ...] + n = u + 1 + s0 = [0] * n + yield s0, f + + # Shift in multiples of the generator of the extension field K + a = K.unit + + # Variables of degree > 0 ordered by increasing degree + d = dmp_degree_list(f, u) + var_indices = [i for di, i in sorted(zip(d, range(u+1))) if di > 0] + + # Now try [1, 0, 0, ...], [0, 1, 0, ...] + for i in var_indices: + s1 = s0.copy() + s1[i] = 1 + a1 = [-a*s1i for s1i in s1] + f1 = dmp_shift(f, a1, u, K) + yield s1, f1 + + # Now try [1, 1, 1, ...], [2, 2, 2, ...] + j = 0 + while True: + j += 1 + sj = [j] * n + aj = [-a*j] * n + fj = dmp_shift(f, aj, u, K) + yield sj, fj + + def dmp_sqf_norm(f, u, K): - """ - Square-free norm of ``f`` in ``K[X]``, useful over algebraic domains. + r""" + Find a shift of ``f`` in ``K[X]`` that has square-free norm. + + The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`). - Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x-sa)`` and ``r(x) = Norm(g(x))`` - is a square-free polynomial over K, where ``a`` is the algebraic extension of ``K``. + Returns `(s,g,r)`, such that `g(x_1,x_2,\cdots)=f(x_1-s_1 a, x_2 - s_2 a, + \cdots)`, `r(x)=\text{Norm}(g(x))` and `r` is a square-free polynomial over + `k`. Examples ======== + We first create the algebraic number field `K=k(a)=\mathbb{Q}(i)` and rings + `K[x,y]` and `k[x,y]`: + >>> from sympy.polys import ring, QQ >>> from sympy import I @@ -135,42 +247,140 @@ def dmp_sqf_norm(f, u, K): >>> R, x, y = ring("x,y", K) >>> _, X, Y = ring("x,y", QQ) - >>> s, f, r = R.dmp_sqf_norm(x*y + y**2) + We can now find a square free norm for a shift of `f`: - >>> s == 1 + >>> f = x*y + y**2 + >>> s, g, r = R.dmp_sqf_norm(f) + + The choice of shifts ``s`` is arbitrary and the particular values returned + for ``g`` and ``r`` are determined by ``s``. + + >>> s + [0, 1] + >>> g == x*y - I*x + y**2 - 2*I*y - 1 + True + >>> r == X**2*Y**2 + X**2 + 2*X*Y**3 + 2*X*Y + Y**4 + 2*Y**2 + 1 True - >>> f == x*y + y**2 + K([QQ(-1), QQ(0)])*y + + The required invariants are: + + >>> g == f.shift_list([-si*K.unit for si in s]) True - >>> r == X**2*Y**2 + 2*X*Y**3 + Y**4 + Y**2 + >>> g.norm() == r True + >>> r.is_squarefree + True + + Explanation + =========== + + This is part of Trager's algorithm for factorizing polynomials over + algebraic number fields. In particular this function is a multivariate + generalization of algorithm ``sqfr_norm`` from [Trager76]_. + + See Also + ======== + dup_sqf_norm: + Analogous function for univariate polynomials over ``k(a)``. + dmp_norm: + Computes the norm of `f` directly without any shift. + dmp_ext_factor: + Function implementing Trager's algorithm that uses this. + sympy.polys.polytools.sqf_norm: + High-level interface for using this function. """ if not u: - return dup_sqf_norm(f, K) + s, g, r = dup_sqf_norm(f, K) + return [s], g, r if not K.is_Algebraic: raise DomainError("ground domain must be algebraic") g = dmp_raise(K.mod.to_list(), u + 1, 0, K.dom) - F = dmp_raise([K.one, -K.unit], u, 0, K) - s = 0 + for s, f in _dmp_sqf_norm_shifts(f, u, K): - while True: h, _ = dmp_inject(f, u, K, front=True) r = dmp_resultant(g, h, u + 1, K.dom) if dmp_sqf_p(r, u, K.dom): break - else: - f, s = dmp_compose(f, F, u, K), s + 1 return s, f, r def dmp_norm(f, u, K): - """ - Norm of ``f`` in ``K[X1, ..., Xn]``, often not square-free. 
+ r""" + Norm of ``f`` in ``K[X]``, often not square-free. + + The domain `K` must be an algebraic number field `k(a)` (see :ref:`QQ(a)`). + + Examples + ======== + + We first define the algebraic number field `K = k(a) = \mathbb{Q}(\sqrt{2})`: + + >>> from sympy import QQ, sqrt + >>> from sympy.polys.sqfreetools import dmp_norm + >>> k = QQ + >>> K = k.algebraic_field(sqrt(2)) + + We can now compute the norm of a polynomial `p` in `K[x,y]`: + + >>> p = [[K(1)], [K(1),K.unit]] # x + y + sqrt(2) + >>> N = [[k(1)], [k(2),k(0)], [k(1),k(0),k(-2)]] # x**2 + 2*x*y + y**2 - 2 + >>> dmp_norm(p, 1, K) == N + True + + In higher level functions that is: + + >>> from sympy import expand, roots, minpoly + >>> from sympy.abc import x, y + >>> from math import prod + >>> a = sqrt(2) + >>> e = (x + y + a) + >>> e.as_poly([x, y], extension=a).norm() + Poly(x**2 + 2*x*y + y**2 - 2, x, y, domain='QQ') + + This is equal to the product of the expressions `x + y + a_i` where the + `a_i` are the conjugates of `a`: + + >>> pa = minpoly(a) + >>> pa + _x**2 - 2 + >>> rs = roots(pa, multiple=True) + >>> rs + [sqrt(2), -sqrt(2)] + >>> n = prod(e.subs(a, r) for r in rs) + >>> n + (x + y - sqrt(2))*(x + y + sqrt(2)) + >>> expand(n) + x**2 + 2*x*y + y**2 - 2 + + Explanation + =========== + + Given an algebraic number field `K = k(a)` any element `b` of `K` can be + represented as polynomial function `b=g(a)` where `g` is in `k[x]`. If the + minimal polynomial of `a` over `k` is `p_a` then the roots `a_1`, `a_2`, + `\cdots` of `p_a(x)` are the conjugates of `a`. The norm of `b` is the + product `g(a1) \times g(a2) \times \cdots` and is an element of `k`. + + As in [Trager76]_ we extend this norm to multivariate polynomials over `K`. + If `b(x)` is a polynomial in `k(a)[X]` then we can think of `b` as being + alternately a function `g_X(a)` where `g_X` is an element of `k[X][y]` i.e. + a polynomial function with coefficients that are elements of `k[X]`. Then + the norm of `b` is the product `g_X(a1) \times g_X(a2) \times \cdots` and + will be an element of `k[X]`. + + See Also + ======== + + dmp_sqf_norm: + Compute a shift of `f` so that the `\text{Norm}(f)` is square-free. + sympy.polys.polytools.Poly.norm: + Higher-level function that calls this. """ if not K.is_Algebraic: raise DomainError("ground domain must be algebraic") @@ -206,6 +416,10 @@ def dup_sqf_part(f, K): >>> R.dup_sqf_part(x**3 - 3*x - 2) x**2 - x - 2 + See Also + ======== + + sympy.polys.polytools.Poly.sqf_part """ if K.is_FiniteField: return dup_gf_sqf_part(f, K) @@ -264,6 +478,8 @@ def dmp_sqf_part(f, u, K): def dup_gf_sqf_list(f, K, all=False): """Compute square-free decomposition of ``f`` in ``GF(p)[x]``. """ + f_orig = f + f = dup_convert(f, K, K.dom) coeff, factors = gf_sqf_list(f, K.mod, K.dom, all=all) @@ -271,6 +487,8 @@ def dup_gf_sqf_list(f, K, all=False): for i, (f, k) in enumerate(factors): factors[i] = (dup_convert(f, K.dom, K), k) + _dup_check_degrees(f_orig, factors) + return K.convert(coeff, K.dom), factors @@ -283,6 +501,8 @@ def dup_sqf_list(f, K, all=False): """ Return square-free decomposition of a polynomial in ``K[x]``. + Uses Yun's algorithm from [Yun76]_. + Examples ======== @@ -296,10 +516,26 @@ def dup_sqf_list(f, K, all=False): >>> R.dup_sqf_list(f, all=True) (2, [(1, 1), (x + 1, 2), (x + 2, 3)]) + See Also + ======== + + dmp_sqf_list: + Corresponding function for multivariate polynomials. + sympy.polys.polytools.sqf_list: + High-level function for square-free factorization of expressions. 
+ sympy.polys.polytools.Poly.sqf_list: + Analogous method on :class:`~.Poly`. + + References + ========== + + [Yun76]_ """ if K.is_FiniteField: return dup_gf_sqf_list(f, K, all=all) + f_orig = f + if K.is_Field: coeff = dup_LC(f, K) f = dup_monic(f, K) @@ -333,6 +569,8 @@ def dup_sqf_list(f, K, all=False): i += 1 + _dup_check_degrees(f_orig, result) + return coeff, result @@ -366,7 +604,7 @@ def dup_sqf_list_include(f, K, all=False): def dmp_sqf_list(f, u, K, all=False): """ - Return square-free decomposition of a polynomial in ``K[X]``. + Return square-free decomposition of a polynomial in `K[X]`. Examples ======== @@ -381,6 +619,26 @@ def dmp_sqf_list(f, u, K, all=False): >>> R.dmp_sqf_list(f, all=True) (1, [(1, 1), (x + y, 2), (x, 3)]) + Explanation + =========== + + Uses Yun's algorithm for univariate polynomials from [Yun76]_ recrusively. + The multivariate polynomial is treated as a univariate polynomial in its + leading variable. Then Yun's algorithm computes the square-free + factorization of the primitive and the content is factored recursively. + + It would be better to use a dedicated algorithm for multivariate + polynomials instead. + + See Also + ======== + + dup_sqf_list: + Corresponding function for univariate polynomials. + sympy.polys.polytools.sqf_list: + High-level function for square-free factorization of expressions. + sympy.polys.polytools.Poly.sqf_list: + Analogous method on :class:`~.Poly`. """ if not u: return dup_sqf_list(f, K, all=all) @@ -388,6 +646,8 @@ def dmp_sqf_list(f, u, K, all=False): if K.is_FiniteField: return dmp_gf_sqf_list(f, u, K, all=all) + f_orig = f + if K.is_Field: coeff = dmp_ground_LC(f, u, K) f = dmp_ground_monic(f, u, K) @@ -445,6 +705,8 @@ def dmp_sqf_list(f, u, K, all=False): result = [(result[i], i) for i in sorted(result)] + _dmp_check_degrees(f_orig, u, result) + return coeff, result diff --git a/sympy/polys/tests/test_densetools.py b/sympy/polys/tests/test_densetools.py index d9c9cf6e56b1..43dae691f52d 100644 --- a/sympy/polys/tests/test_densetools.py +++ b/sympy/polys/tests/test_densetools.py @@ -20,7 +20,7 @@ dup_primitive, dmp_ground_primitive, dup_extract, dmp_ground_extract, dup_real_imag, - dup_mirror, dup_scale, dup_shift, + dup_mirror, dup_scale, dup_shift, dmp_shift, dup_transform, dup_compose, dmp_compose, dup_decompose, @@ -39,7 +39,7 @@ from sympy.polys.specialpolys import f_polys -from sympy.polys.domains import FF, ZZ, QQ, EX, RR +from sympy.polys.domains import FF, ZZ, QQ, ZZ_I, QQ_I, EX, RR from sympy.polys.rings import ring from sympy.core.numbers import I @@ -75,6 +75,8 @@ def test_dup_integrate(): def test_dmp_integrate(): + assert dmp_integrate([QQ(1)], 2, 0, QQ) == [QQ(1, 2), QQ(0), QQ(0)] + assert dmp_integrate([[[]]], 1, 2, QQ) == [[[]]] assert dmp_integrate([[[]]], 2, 2, QQ) == [[[]]] @@ -107,6 +109,9 @@ def test_dmp_integrate_in(): dmp_swap( dmp_integrate(dmp_swap(f, 0, 2, 3, QQ), 3, 3, QQ), 0, 2, 3, QQ) + raises(IndexError, lambda: dmp_integrate_in(f, 1, -1, 3, QQ)) + raises(IndexError, lambda: dmp_integrate_in(f, 1, 4, 3, QQ)) + def test_dup_diff(): assert dup_diff([], 1, ZZ) == [] @@ -180,6 +185,8 @@ def test_dmp_diff_in(): assert dmp_diff_in(f_6, 3, 2, 3, ZZ) == \ dmp_swap(dmp_diff(dmp_swap(f_6, 0, 2, 3, ZZ), 3, 3, ZZ), 0, 2, 3, ZZ) + raises(IndexError, lambda: dmp_diff_in(f_6, 1, -1, 3, ZZ)) + raises(IndexError, lambda: dmp_diff_in(f_6, 1, 4, 3, ZZ)) def test_dup_eval(): assert dup_eval([], 7, ZZ) == 0 @@ -217,6 +224,9 @@ def test_dmp_eval_in(): assert dmp_eval_in(f, -2, 2, 2, ZZ) == \ [[45], [], [], [-9, 
-1, 0, -44]] + raises(IndexError, lambda: dmp_eval_in(f_6, ZZ(1), -1, 3, ZZ)) + raises(IndexError, lambda: dmp_eval_in(f_6, ZZ(1), 4, 3, ZZ)) + def test_dmp_eval_tail(): assert dmp_eval_tail([[]], [1], 1, ZZ) == [] @@ -248,6 +258,11 @@ def test_dmp_diff_eval_in(): assert dmp_diff_eval_in(f_6, 2, 7, 1, 3, ZZ) == \ dmp_eval(dmp_diff(dmp_swap(f_6, 0, 1, 3, ZZ), 2, 3, ZZ), 7, 3, ZZ) + assert dmp_diff_eval_in(f_6, 2, 7, 0, 3, ZZ) == \ + dmp_eval(dmp_diff(f_6, 2, 3, ZZ), 7, 3, ZZ) + + raises(IndexError, lambda: dmp_diff_eval_in(f_6, 1, ZZ(1), 4, 3, ZZ)) + def test_dup_revert(): f = [-QQ(1, 720), QQ(0), QQ(1, 24), QQ(0), -QQ(1, 2), QQ(0), QQ(1)] @@ -271,6 +286,12 @@ def test_dup_trunc(): assert dup_trunc([1, 2, 3, 4, 5, 6], ZZ(3), ZZ) == [1, -1, 0, 1, -1, 0] assert dup_trunc([6, 5, 4, 3, 2, 1], ZZ(3), ZZ) == [-1, 1, 0, -1, 1] + R = ZZ_I + assert dup_trunc([R(3), R(4), R(5)], R(3), R) == [R(1), R(-1)] + + K = FF(5) + assert dup_trunc([K(3), K(4), K(5)], K(3), K) == [K(1), K(0)] + def test_dmp_trunc(): assert dmp_trunc([[]], [1, 2], 2, ZZ) == [[]] @@ -294,6 +315,8 @@ def test_dup_monic(): def test_dmp_ground_monic(): + assert dmp_ground_monic([3, 6, 9], 0, ZZ) == [1, 2, 3] + assert dmp_ground_monic([[3], [6], [9]], 1, ZZ) == [[1], [2], [3]] raises( @@ -386,6 +409,8 @@ def test_dup_primitive(): def test_dmp_ground_primitive(): + assert dmp_ground_primitive([ZZ(1)], 0, ZZ) == (ZZ(1), [ZZ(1)]) + assert dmp_ground_primitive([[]], 1, ZZ) == (ZZ(0), [[]]) assert dmp_ground_primitive(f_0, 2, ZZ) == (ZZ(1), f_0) @@ -456,9 +481,15 @@ def test_dup_real_imag(): assert dup_real_imag( [1, 2, 3], ZZ) == ([[1], [2], [-1, 0, 3]], [[2, 0], [2, 0]]) + assert dup_real_imag([ZZ(1), ZZ(0), ZZ(1), ZZ(3)], ZZ) == ( + [[ZZ(1)], [], [ZZ(-3), ZZ(0), ZZ(1)], [ZZ(3)]], + [[ZZ(3), ZZ(0)], [], [ZZ(-1), ZZ(0), ZZ(1), ZZ(0)]] + ) + raises(DomainError, lambda: dup_real_imag([EX(1), EX(2)], EX)) + def test_dup_mirror(): assert dup_mirror([], ZZ) == [] assert dup_mirror([1], ZZ) == [1] @@ -483,6 +514,16 @@ def test_dup_shift(): assert dup_shift([1, 2, 3, 4, 5], 7, ZZ) == [1, 30, 339, 1712, 3267] +def test_dmp_shift(): + assert dmp_shift([ZZ(1), ZZ(2)], [ZZ(1)], 0, ZZ) == [ZZ(1), ZZ(3)] + + assert dmp_shift([[]], [ZZ(1), ZZ(2)], 1, ZZ) == [[]] + + xy = [[ZZ(1), ZZ(0)], []] # x*y + x1y2 = [[ZZ(1), ZZ(2)], [ZZ(1), ZZ(2)]] # (x+1)*(y+2) + assert dmp_shift(xy, [ZZ(1), ZZ(2)], 1, ZZ) == x1y2 + + def test_dup_transform(): assert dup_transform([], [], [1, 1], ZZ) == [] assert dup_transform([], [1], [1, 1], ZZ) == [] @@ -570,12 +611,17 @@ def test_dup_decompose(): def test_dmp_lift(): q = [QQ(1, 1), QQ(0, 1), QQ(1, 1)] - f = [ANP([QQ(1, 1)], q, QQ), ANP([], q, QQ), ANP([], q, QQ), + f_a = [ANP([QQ(1, 1)], q, QQ), ANP([], q, QQ), ANP([], q, QQ), ANP([QQ(1, 1), QQ(0, 1)], q, QQ), ANP([QQ(17, 1), QQ(0, 1)], q, QQ)] - assert dmp_lift(f, 0, QQ.algebraic_field(I)) == \ - [QQ(1), QQ(0), QQ(0), QQ(0), QQ(0), QQ(0), QQ(2), QQ(0), QQ(578), - QQ(0), QQ(0), QQ(0), QQ(1), QQ(0), QQ(-578), QQ(0), QQ(83521)] + f_lift = [QQ(1), QQ(0), QQ(0), QQ(0), QQ(0), QQ(0), QQ(2), QQ(0), QQ(578), + QQ(0), QQ(0), QQ(0), QQ(1), QQ(0), QQ(-578), QQ(0), QQ(83521)] + + assert dmp_lift(f_a, 0, QQ.algebraic_field(I)) == f_lift + + f_g = [QQ_I(1), QQ_I(0), QQ_I(0), QQ_I(0, 1), QQ_I(0, 17)] + + assert dmp_lift(f_g, 0, QQ_I) == f_lift raises(DomainError, lambda: dmp_lift([EX(1), EX(2)], 0, EX)) diff --git a/sympy/polys/tests/test_factortools.py b/sympy/polys/tests/test_factortools.py index 84133d4137e4..7f99097c71e9 100644 --- a/sympy/polys/tests/test_factortools.py +++ 
b/sympy/polys/tests/test_factortools.py @@ -562,7 +562,10 @@ def anp(element): def test_dmp_ext_factor(): - R, x,y = ring("x,y", QQ.algebraic_field(sqrt(2))) + K = QQ.algebraic_field(sqrt(2)) + R, x,y = ring("x,y", K) + sqrt2 = K.unit + def anp(x): return ANP(x, [QQ(1), QQ(0), QQ(-2)], QQ) @@ -588,6 +591,12 @@ def anp(x): (anp([QQ(2)]), [(anp([QQ(1)])*x + anp([QQ(-1), QQ(0)])*y, 1), (anp([QQ(1)])*x + anp([QQ( 1), QQ(0)])*y, 1)]) + f1 = y + 1 + f2 = y + sqrt2 + f3 = x**2 + x + 2 + 3*sqrt2 + f = f1**2 * f2**2 * f3**2 + assert R.dmp_ext_factor(f) == (K.one, [(f1, 2), (f2, 2), (f3, 2)]) + def test_dup_factor_list(): R, x = ring("x", ZZ) diff --git a/sympy/polys/tests/test_polytools.py b/sympy/polys/tests/test_polytools.py index 2bab24433e43..0cce69d0ffb1 100644 --- a/sympy/polys/tests/test_polytools.py +++ b/sympy/polys/tests/test_polytools.py @@ -2313,6 +2313,11 @@ def test_compose(): def test_shift(): assert Poly(x**2 - 2*x + 1, x).shift(2) == Poly(x**2 + 2*x + 1, x) + +def test_shift_list(): + assert Poly(x*y, [x,y]).shift_list([1,2]) == Poly((x+1)*(y+2), [x,y]) + + def test_transform(): # Also test that 3-way unification is done correctly assert Poly(x**2 - 2*x + 1, x).transform(Poly(x + 1), Poly(x - 1)) == \ @@ -2397,17 +2402,17 @@ def test_norm(): def test_sqf_norm(): assert sqf_norm(x**2 - 2, extension=sqrt(3)) == \ - (1, x**2 - 2*sqrt(3)*x + 1, x**4 - 10*x**2 + 1) + ([1], x**2 - 2*sqrt(3)*x + 1, x**4 - 10*x**2 + 1) assert sqf_norm(x**2 - 3, extension=sqrt(2)) == \ - (1, x**2 - 2*sqrt(2)*x - 1, x**4 - 10*x**2 + 1) + ([1], x**2 - 2*sqrt(2)*x - 1, x**4 - 10*x**2 + 1) assert Poly(x**2 - 2, extension=sqrt(3)).sqf_norm() == \ - (1, Poly(x**2 - 2*sqrt(3)*x + 1, x, extension=sqrt(3)), - Poly(x**4 - 10*x**2 + 1, x, domain='QQ')) + ([1], Poly(x**2 - 2*sqrt(3)*x + 1, x, extension=sqrt(3)), + Poly(x**4 - 10*x**2 + 1, x, domain='QQ')) assert Poly(x**2 - 3, extension=sqrt(2)).sqf_norm() == \ - (1, Poly(x**2 - 2*sqrt(2)*x - 1, x, extension=sqrt(2)), - Poly(x**4 - 10*x**2 + 1, x, domain='QQ')) + ([1], Poly(x**2 - 2*sqrt(2)*x - 1, x, extension=sqrt(2)), + Poly(x**4 - 10*x**2 + 1, x, domain='QQ')) def test_sqf(): @@ -2693,6 +2698,24 @@ def test_factor(): assert factor_list((x - sqrt(2)*pi)*(x + sqrt(2)*pi), x) == ( 1, [(x - sqrt(2)*pi, 1), (x + sqrt(2)*pi, 1)]) + # https://github.com/sympy/sympy/issues/26497 + p = ((y - I)**2 * (y + I) * (x + 1)) + assert factor(expand(p)) == p + + p = ((x - I)**2 * (x + I) * (y + 1)) + assert factor(expand(p)) == p + + p = (y + 1)**2*(y + sqrt(2))**2*(x**2 + x + 2 + 3*sqrt(2))**2 + assert factor(expand(p), extension=True) == p + + e = ( + -x**2*y**4/(y**2 + 1) + 2*I*x**2*y**3/(y**2 + 1) + 2*I*x**2*y/(y**2 + 1) + + x**2/(y**2 + 1) - 2*x*y**4/(y**2 + 1) + 4*I*x*y**3/(y**2 + 1) + + 4*I*x*y/(y**2 + 1) + 2*x/(y**2 + 1) - y**4 - y**4/(y**2 + 1) + 2*I*y**3 + + 2*I*y**3/(y**2 + 1) + 2*I*y + 2*I*y/(y**2 + 1) + 1 + 1/(y**2 + 1) + ) + assert factor(e) == -(y - I)**3*(y + I)*(x**2 + 2*x + y**2 + 2)/(y**2 + 1) + def test_factor_large(): f = (x**2 + 4*x + 4)**10000000*(x**2 + 1)*(x**2 + 2*x + 1)**1234567 diff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py index 2f560922ea81..3a48d45a6f15 100644 --- a/sympy/polys/tests/test_rings.py +++ b/sympy/polys/tests/test_rings.py @@ -1490,6 +1490,12 @@ def test_PolyElement_decompose(): def test_PolyElement_shift(): _, x = ring("x", ZZ) assert (x**2 - 2*x + 1).shift(2) == x**2 + 2*x + 1 + assert (x**2 - 2*x + 1).shift_list([2]) == x**2 + 2*x + 1 + + R, x, y = ring("x, y", ZZ) + assert (x*y).shift_list([1, 2]) == 
(x+1)*(y+2) + + raises(MultivariatePolynomialError, lambda: (x*y).shift(1)) def test_PolyElement_sturm(): F, t = field("t", ZZ) @@ -1513,16 +1519,25 @@ def test_PolyElement_gff_list(): f = x*(x - 1)**3*(x - 2)**2*(x - 4)**2*(x - 5) assert f.gff_list() == [(x**2 - 5*x + 4, 1), (x**2 - 5*x + 4, 2), (x, 3)] +def test_PolyElement_norm(): + k = QQ + K = QQ.algebraic_field(sqrt(2)) + sqrt2 = K.unit + _, X, Y = ring("x,y", k) + _, x, y = ring("x,y", K) + + assert (x*y + sqrt2).norm() == X**2*Y**2 - 2 + def test_PolyElement_sqf_norm(): R, x = ring("x", QQ.algebraic_field(sqrt(3))) X = R.to_ground().x - assert (x**2 - 2).sqf_norm() == (1, x**2 - 2*sqrt(3)*x + 1, X**4 - 10*X**2 + 1) + assert (x**2 - 2).sqf_norm() == ([1], x**2 - 2*sqrt(3)*x + 1, X**4 - 10*X**2 + 1) R, x = ring("x", QQ.algebraic_field(sqrt(2))) X = R.to_ground().x - assert (x**2 - 3).sqf_norm() == (1, x**2 - 2*sqrt(2)*x - 1, X**4 - 10*X**2 + 1) + assert (x**2 - 3).sqf_norm() == ([1], x**2 - 2*sqrt(2)*x - 1, X**4 - 10*X**2 + 1) def test_PolyElement_sqf_list(): _, x = ring("x", ZZ) diff --git a/sympy/polys/tests/test_sqfreetools.py b/sympy/polys/tests/test_sqfreetools.py index 976b035163ed..b772a05a50e2 100644 --- a/sympy/polys/tests/test_sqfreetools.py +++ b/sympy/polys/tests/test_sqfreetools.py @@ -134,6 +134,10 @@ def test_dmp_sqf(): f = -x**2 + 2*x - 1 assert R.dmp_sqf_list_include(f) == [(-1, 1), (x - 1, 2)] + f = (y**2 + 1)**2*(x**2 + 2*x + 2) + assert R.dmp_sqf_p(f) is False + assert R.dmp_sqf_list(f) == (1, [(x**2 + 2*x + 2, 1), (y**2 + 1, 2)]) + R, x, y = ring("x,y", FF(2)) raises(NotImplementedError, lambda: R.dmp_sqf_list(y**2 + 1))
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-26346@3a74b4f
sympy/sympy
Python
26,346
Fixed Maximum bending moment bug in physics beam
#### References to other Issues or PRs

Fixes #24221

#### Brief description of what is fixed or changed

Simplification of the shear force equation was required before converting it into a Piecewise expression. Also corrected the sorting of the singularity list, which was causing issues with rational points.

#### Other comments

#### Release Notes

<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
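The second fix described above swaps the order of sorting and deduplication of the singularity points. A small illustrative sketch (the values are invented, loosely based on the lengths in the linked issue) of why the sort must happen after `set()`:

```python
from sympy import Rational

# Sorting first and deduplicating afterwards loses the ordering,
# because set() does not preserve order.
points = [Rational(31, 5), 0, Rational(15, 2), 0]
points.sort()
maybe_unsorted = list(set(points))   # what the old code did: order no longer guaranteed
in_order = sorted(set(points))       # deduplicate first, then sort (what the fix does)
print(maybe_unsorted, in_order)
```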
2024-03-10T21:52:17Z
max bending moment bug

```python
from sympy.physics.continuum_mechanics.beam import Beam
from sympy import symbols

L1 = 6.2
L2 = 1.3

E, I = symbols('E, I')
R1, R2 = symbols('R1, R2')
trave = Beam(L1+L2, E, I)
trave.apply_load(R1, 0, -1)
trave.apply_load(R2, L1, -1)

trave.apply_load(10, 0, 0)

trave.bc_deflection.append((0, 0))
trave.bc_deflection.append((L1, 0))
#b.bc_deflection.append((L1+L2, 0))

trave.solve_for_reaction_loads(R1, R2)
trave.reaction_loads

trave.plot_shear_force()
trave.plot_bending_moment()

# Print max shear force and its location on beam
print(trave.max_shear_force())

# Print max bending moment and its location on beam.
print(trave.max_bmoment())
```

With this code, max_bmoment gives the right maximum moment. If the end of the distributed load is declared instead, the maximum moment cannot be printed:

```python
from sympy.physics.continuum_mechanics.beam import Beam
from sympy import symbols

L1 = 6.2
L2 = 1.3

E, I = symbols('E, I')
R1, R2 = symbols('R1, R2')
trave = Beam(L1+L2, E, I)
trave.apply_load(R1, 0, -1)
trave.apply_load(R2, L1, -1)

trave.apply_load(10, 0, 0, 7.5)

trave.bc_deflection.append((0, 0))
trave.bc_deflection.append((L1, 0))
#b.bc_deflection.append((L1+L2, 0))

trave.solve_for_reaction_loads(R1, R2)
trave.reaction_loads

trave.plot_shear_force()
trave.plot_bending_moment()

# Print max shear force and its location on beam
print(trave.max_shear_force())

# Print max bending moment and its location on beam.
print(trave.max_bmoment())
```
[ { "body": "```python\r\nfrom sympy.physics.continuum_mechanics.beam import Beam\r\nfrom sympy import symbols\r\n\r\nL1 = 6.2\r\nL2 = 1.3\r\n\r\nE, I = symbols('E, I')\r\nR1, R2 = symbols('R1, R2')\r\ntrave = Beam(L1+L2, E, I)\r\ntrave.apply_load(R1, 0, -1)\r\ntrave.apply_load(R2, L1, -1)\r\n\r\ntrave.apply_load(10,0,0)\r\n\r\ntrave.bc_deflection.append((0, 0))\r\ntrave.bc_deflection.append((L1, 0))\r\n#b.bc_deflection.append((L1+L2, 0))\r\n\r\ntrave.solve_for_reaction_loads(R1, R2)\r\ntrave.reaction_loads\r\n\r\ntrave.plot_shear_force() \r\ntrave.plot_bending_moment() \r\n\r\n# Print max shear force and its location on beam\r\nprint(trave.max_shear_force())\r\n\r\n# Print max bending moment and its location on beam.\r\nprint(trave.max_bmoment())\r\n\r\nwith this code, max_bmoment gives right maximum moment\r\nif I declare end of distributed load, i can't print maximum moment\r\n\r\nfrom sympy.physics.continuum_mechanics.beam import Beam\r\nfrom sympy import symbols\r\n\r\nL1 = 6.2\r\nL2 = 1.3\r\n\r\nE, I = symbols('E, I')\r\nR1, R2 = symbols('R1, R2')\r\ntrave = Beam(L1+L2, E, I)\r\ntrave.apply_load(R1, 0, -1)\r\ntrave.apply_load(R2, L1, -1)\r\n\r\ntrave.apply_load(10,0,0, 7.5)\r\n\r\ntrave.bc_deflection.append((0, 0))\r\ntrave.bc_deflection.append((L1, 0))\r\n#b.bc_deflection.append((L1+L2, 0))\r\n\r\ntrave.solve_for_reaction_loads(R1, R2)\r\ntrave.reaction_loads\r\n\r\ntrave.plot_shear_force() \r\ntrave.plot_bending_moment() \r\n\r\n# Print max shear force and its location on beam\r\nprint(trave.max_shear_force())\r\n\r\n# Print max bending moment and its location on beam.\r\nprint(trave.max_bmoment())\r\n```", "number": 24221, "title": "max bending moment bug" } ]
07da3cec213b829d1edef980bd44b3547ffb0f8c
{ "head_commit": "3a74b4fcf087b6cad20c9bd54f491b5f3c2d3f98", "head_commit_message": "Apply suggestions from code review", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 20b4d5efc52e..f64d79255ec8 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1119,6 +1119,7 @@ Prateek Papriwal <[email protected]>\n Praveen Sahu <[email protected]> povinsahu1909 <[email protected]>\n Prayush Dawda <[email protected]>\n Prempal Singh <[email protected]>\n+Prey Patel <[email protected]>\n Priit Laes <[email protected]>\n Prince Gupta <[email protected]> LAPTOP-AS1M2R8B\\codem <[email protected]>\n Prionti Nasir <[email protected]>\ndiff --git a/sympy/physics/continuum_mechanics/beam.py b/sympy/physics/continuum_mechanics/beam.py\nindex fc9c436414fd..a9a9bc20cc9b 100644\n--- a/sympy/physics/continuum_mechanics/beam.py\n+++ b/sympy/physics/continuum_mechanics/beam.py\n@@ -934,8 +934,8 @@ def max_shear_force(self):\n if isinstance(term, Mul):\n term = term.args[-1] # SingularityFunction in the term\n singularity.append(term.args[1])\n- singularity.sort()\n singularity = list(set(singularity))\n+ singularity.sort()\n \n intervals = [] # List of Intervals with discrete value of shear force\n shear_values = [] # List of values of shear force in each interval\n@@ -1018,8 +1018,8 @@ def max_bmoment(self):\n if isinstance(term, Mul):\n term = term.args[-1] # SingularityFunction in the term\n singularity.append(term.args[1])\n- singularity.sort()\n singularity = list(set(singularity))\n+ singularity.sort()\n \n intervals = [] # List of Intervals with discrete value of bending moment\n moment_values = [] # List of values of bending moment in each interval\n@@ -1027,7 +1027,10 @@ def max_bmoment(self):\n if s == 0:\n continue\n try:\n- moment_slope = Piecewise((float(\"nan\"), x<=singularity[i-1]),(self.shear_force().rewrite(Piecewise), x<s), (float(\"nan\"), True))\n+ moment_slope = Piecewise(\n+ (float(\"nan\"), x <= singularity[i - 1]),\n+ (self.shear_force().rewrite(Piecewise), x < s),\n+ (float(\"nan\"), True))\n points = solve(moment_slope, x)\n val = []\n for point in points:\n@@ -1037,6 +1040,7 @@ def max_bmoment(self):\n max_moment = max(val)\n moment_values.append(max_moment)\n intervals.append(points[val.index(max_moment)])\n+\n # If bending moment in a particular Interval has zero or constant\n # slope, then above block gives NotImplementedError as solve\n # can't represent Interval solutions.\ndiff --git a/sympy/physics/continuum_mechanics/tests/test_beam.py b/sympy/physics/continuum_mechanics/tests/test_beam.py\nindex 3ee6b044b448..7891a2a8cf30 100644\n--- a/sympy/physics/continuum_mechanics/tests/test_beam.py\n+++ b/sympy/physics/continuum_mechanics/tests/test_beam.py\n@@ -514,7 +514,9 @@ def test_max_shear_force():\n b.apply_load(R2, l, -1)\n b.apply_load(P, 0, 0, end=l)\n b.solve_for_reaction_loads(R1, R2)\n- assert b.max_shear_force() == (0, l*Abs(P)/2)\n+ max_shear = b.max_shear_force()\n+ assert max_shear[0] == 0\n+ assert simplify(max_shear[1] - (l*Abs(P)/2)) == 0\n \n \n def test_max_bmoment():\ndiff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py\nindex 4f761ee4f68e..84f751122c99 100644\n--- a/sympy/solvers/solvers.py\n+++ b/sympy/solvers/solvers.py\n@@ -1417,8 +1417,15 @@ def _solve(f, *symbols, **flags):\n result = set()\n if any(e.is_zero for e, c in f.args):\n f = f.simplify() # failure imminent w/o help\n- for i, (expr, cond) in enumerate(f.args):\n- if expr.is_zero:\n+\n+ cond = neg = True\n+ for i, (expr, cnd) in enumerate(f.args):\n+ # the explicit condition for this 
expr is the current cond\n+ # and none of the previous conditions\n+ cond = And(neg, cnd).simplify()\n+ neg = And(neg, ~cond)\n+\n+ if expr.is_zero and cond != False:\n raise NotImplementedError(filldedent('''\n An expression is already zero when %s.\n This means that in this *region* the solution\n@@ -1427,10 +1434,7 @@ def _solve(f, *symbols, **flags):\n interval it might be resolved with simplification\n of the Piecewise conditions.''' % cond))\n candidates = _vsolve(expr, symbol, **flags)\n- # the explicit condition for this expr is the current cond\n- # and none of the previous conditions\n- args = [~c for _, c in f.args[:i]] + [cond]\n- cond = And(*args)\n+\n for candidate in candidates:\n if candidate in result:\n # an unconditional value was already there\n" }
[ { "diff_hunk": "@@ -1417,8 +1417,15 @@ def _solve(f, *symbols, **flags):\n result = set()\n if any(e.is_zero for e, c in f.args):\n f = f.simplify() # failure imminent w/o help\n- for i, (expr, cond) in enumerate(f.args):\n- if expr.is_zero:\n+\n+ cond = neg = True\n+ for i, (expr, cnd) in enumerate(f.args):\n+ # the explicit condition for this expr is the current cond\n+ # and none of the previous conditions\n+ cond = And(neg, cnd).simplify()\n+ neg = And(neg, ~cond)\n+\n+ if expr.is_zero and cond != False:", "line": null, "original_line": 1428, "original_start_line": 1425, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@user1:\n```suggestion\r\n cond = And(neg, cnd)\r\n neg = And(neg, ~cond)\r\n\r\n if expr.is_zero and cond.simplify() != False:\r\n```" } ]
2a6c98fca9229cc31d87a4f0af362ed1e08579a1
diff --git a/.mailmap b/.mailmap index 20b4d5efc52e..f64d79255ec8 100644 --- a/.mailmap +++ b/.mailmap @@ -1119,6 +1119,7 @@ Prateek Papriwal <[email protected]> Praveen Sahu <[email protected]> povinsahu1909 <[email protected]> Prayush Dawda <[email protected]> Prempal Singh <[email protected]> +Prey Patel <[email protected]> Priit Laes <[email protected]> Prince Gupta <[email protected]> LAPTOP-AS1M2R8B\codem <[email protected]> Prionti Nasir <[email protected]> diff --git a/sympy/physics/continuum_mechanics/beam.py b/sympy/physics/continuum_mechanics/beam.py index fc9c436414fd..a9a9bc20cc9b 100644 --- a/sympy/physics/continuum_mechanics/beam.py +++ b/sympy/physics/continuum_mechanics/beam.py @@ -934,8 +934,8 @@ def max_shear_force(self): if isinstance(term, Mul): term = term.args[-1] # SingularityFunction in the term singularity.append(term.args[1]) - singularity.sort() singularity = list(set(singularity)) + singularity.sort() intervals = [] # List of Intervals with discrete value of shear force shear_values = [] # List of values of shear force in each interval @@ -1018,8 +1018,8 @@ def max_bmoment(self): if isinstance(term, Mul): term = term.args[-1] # SingularityFunction in the term singularity.append(term.args[1]) - singularity.sort() singularity = list(set(singularity)) + singularity.sort() intervals = [] # List of Intervals with discrete value of bending moment moment_values = [] # List of values of bending moment in each interval @@ -1027,7 +1027,10 @@ def max_bmoment(self): if s == 0: continue try: - moment_slope = Piecewise((float("nan"), x<=singularity[i-1]),(self.shear_force().rewrite(Piecewise), x<s), (float("nan"), True)) + moment_slope = Piecewise( + (float("nan"), x <= singularity[i - 1]), + (self.shear_force().rewrite(Piecewise), x < s), + (float("nan"), True)) points = solve(moment_slope, x) val = [] for point in points: @@ -1037,6 +1040,7 @@ def max_bmoment(self): max_moment = max(val) moment_values.append(max_moment) intervals.append(points[val.index(max_moment)]) + # If bending moment in a particular Interval has zero or constant # slope, then above block gives NotImplementedError as solve # can't represent Interval solutions. diff --git a/sympy/physics/continuum_mechanics/tests/test_beam.py b/sympy/physics/continuum_mechanics/tests/test_beam.py index 3ee6b044b448..7891a2a8cf30 100644 --- a/sympy/physics/continuum_mechanics/tests/test_beam.py +++ b/sympy/physics/continuum_mechanics/tests/test_beam.py @@ -514,7 +514,9 @@ def test_max_shear_force(): b.apply_load(R2, l, -1) b.apply_load(P, 0, 0, end=l) b.solve_for_reaction_loads(R1, R2) - assert b.max_shear_force() == (0, l*Abs(P)/2) + max_shear = b.max_shear_force() + assert max_shear[0] == 0 + assert simplify(max_shear[1] - (l*Abs(P)/2)) == 0 def test_max_bmoment(): diff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py index 4f761ee4f68e..de8be8b9ff1b 100644 --- a/sympy/solvers/solvers.py +++ b/sympy/solvers/solvers.py @@ -1417,8 +1417,15 @@ def _solve(f, *symbols, **flags): result = set() if any(e.is_zero for e, c in f.args): f = f.simplify() # failure imminent w/o help - for i, (expr, cond) in enumerate(f.args): - if expr.is_zero: + + cond = neg = True + for i, (expr, cnd) in enumerate(f.args): + # the explicit condition for this expr is the current cond + # and none of the previous conditions + cond = And(neg, cnd) + neg = And(neg, ~cond) + + if expr.is_zero and cond.simplify() != False: raise NotImplementedError(filldedent(''' An expression is already zero when %s. 
This means that in this *region* the solution @@ -1427,10 +1434,7 @@ def _solve(f, *symbols, **flags): interval it might be resolved with simplification of the Piecewise conditions.''' % cond)) candidates = _vsolve(expr, symbol, **flags) - # the explicit condition for this expr is the current cond - # and none of the previous conditions - args = [~c for _, c in f.args[:i]] + [cond] - cond = And(*args) + for candidate in candidates: if candidate in result: # an unconditional value was already there
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26232@f71a407
sympy/sympy
Python
26,232
Patched Wigner3j not working with floats that were half integers
This pull request addresses an issue where the Wigner3j function would raise a ValueError when encountering half-integer inputs recognized as floats. The problem stemmed from incomplete validation checks, which failed to properly handle such inputs. #### References to other Issues or PRs Fixed #26219 #### Brief description of what is fixed or changed To resolve this issue, I implemented enhanced validation checks within the function. Now, all inputs are meticulously validated to ensure they are either integers or half-integers. This modification ensures robustness and prevents the function from failing due to invalid input types. <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-02-15T16:26:41Z
Wigner3j has problems with Half Integers I was trying to find the result of this operation and am getting this error for half integers ``` >>> w3j = Wigner3j(0,0,1/2,1/2,1/2,-1/2) >>> w3j.doit() Traceback (most recent call last): File "<console>", line 1, in <module> File "D:\Orgs\sympy\sympy\physics\quantum\cg.py", line 158, in doit return wigner_3j(self.j1, self.j2, self.j3, self.m1, self.m2, self.m3) File "D:\Orgs\sympy\sympy\physics\wigner.py", line 199, in wigner_3j raise ValueError("j values must be integer or half integer") ValueError: j values must be integer or half integer >>> w3j = Wigner3j(0.5,0.5,0.5,0.5,0.5,0.5) >>> w3j.doit() Traceback (most recent call last): File "<console>", line 1, in <module> File "D:\Orgs\sympy\sympy\physics\quantum\cg.py", line 158, in doit return wigner_3j(self.j1, self.j2, self.j3, self.m1, self.m2, self.m3) File "D:\Orgs\sympy\sympy\physics\wigner.py", line 199, in wigner_3j raise ValueError("j values must be integer or half integer") ValueError: j values must be integer or half integer ``` It is working fine for integer values but gives an error of values with half integers .
What happens if rational numbers are used instead of floats? ``` import sympy from sympy import Rational import sympy.physics.wigner as w a = Rational(1,2) b=w.Wigner3j(a,a,a,a,a,a) b.doit() 0 ``` I think Rationals are working fine in my case , although I dont think there should be any difference in handling sympy rationals and floating point numbers here . This issue seems to be specific to the inbuilt sympy shell that we run in the local repository . The examples used by me initially are working fine on colab . For Google Colab ``` import sympy from sympy import Rational import sympy.physics.wigner as w a = 0.5 b=w.Wigner3j(a,a,a,a,a,a) b.doit() 0 ``` For bin/isympy ``` >>> import sympy.physics.wigner as w >>> a = Rational(1,2) >>> b = 1/2 >>> c = w.Wigner3j(a,a,a,a,a,a) >>> c.doit() 0 >>> d= w.Wigner3j(b,b,b,b,b,b) >>> d.doit() Traceback (most recent call last): File "<console>", line 1, in <module> File "D:\Orgs\sympy\sympy\physics\wigner.py", line 916, in doit return wigner_3j(*self.args) File "D:\Orgs\sympy\sympy\physics\wigner.py", line 199, in wigner_3j raise ValueError("j values must be integer or half integer") ValueError: j values must be integer or half integer ``` Rationals and floats are inherently different in SymPy. For example, consider `(2**54-1)/2**55` vs `Rational(2**54-1, 2**55)`. With SymPy 1.12 we have: ```python In [1]: from sympy.physics.wigner import Wigner3j In [2]: Wigner3j(0,0,1/2,1/2,1/2,-1/2).doit() Out[2]: 0 ``` Probably the difference is that now floats and rationals don't compare equal any more. @oscarbenjamin so what do you suggest? @T-vaccari The Value error is being handled inside the `wigner_3j` function so I dont't think we need to handle it in the `doit` method . "We need to ensure that when inserting half-integer values, we use the `Rational` function, as opposed to Python's standard floating-point division (b = 1/2). This ensures accurate representation and proper handling of half-integer values in SymPy functions like Wigner3j, avoiding potential issues related to floating-point arithmetic ``` >>> from sympy.physics.quantum.cg import Wigner3j >>> a = Rational(1,2) >>> b = 0.5 >>> int(a*2)==a*2 True >>> int(b*2)==b*2 True >>> w3j = Wigner3j(a,a,a,a,a,a) >>> w3j ⎛1/2 1/2 1/2⎞ ⎜ ⎟ ⎝1/2 1/2 1/2⎠ >>> w3j.doit() 0 >>> w3j = Wigner3j(b,b,b,b,b,b) >>> w3j ⎛0.5 0.5 0.5⎞ ⎜ ⎟ ⎝0.5 0.5 0.5⎠ >>> w3j.doit() Traceback (most recent call last): File "<console>", line 1, in <module> File "D:\Orgs\sympy\sympy\physics\quantum\cg.py", line 158, in doit return wigner_3j(self.j1, self.j2, self.j3, self.m1, self.m2, self.m3) File "D:\Orgs\sympy\sympy\physics\wigner.py", line 199, in wigner_3j raise ValueError("j values must be integer or half integer") ValueError: j values must be integer or half integer ``` This behaviour is very ambiguous given the code handling this ``` if int(j_1 * 2) != j_1 * 2 or int(j_2 * 2) != j_2 * 2 or \ int(j_3 * 2) != j_3 * 2: raise ValueError("j values must be integer or half integer") ``` Is it happening cause of changes in data type . I printed the types of j values and got the following ``` <class 'sympy.core.numbers.Half'> <class 'sympy.core.numbers.Half'> <class 'sympy.core.numbers.Half'> <class 'sympy.core.numbers.Float'> <class 'sympy.core.numbers.Float'> <class 'sympy.core.numbers.Float'> ``` I suggest using some method to convert them all to a uniform type but as mentioned in several other places would be a problem with the ducktyping . I'd say we should find out where the parameters are being converted to the the above types . 
@oscarbenjamin Can you please explain why we are using the sympy number types and why some number can be compared to their python counterparts but others can't . Is it a flaw or something intentionally designed . The solution to the problem is to ensure that all input values provided to the Wigner3j function are represented using SymPy number types, such as Rational. Maybe we can work on this if we are allowed to. What do you think @oscarbenjamin, @shishir-11 ?
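The conversion idea floated in the comments above can be sketched with the standard library alone. This is an illustrative sketch only: the helper name and sample values are made up here, and the merged change further down does the equivalent with SymPy's `Integer`/`Rational` and `equal_valued` instead.

```python
from fractions import Fraction

def as_half_integer(value):
    # Floats such as 0.5, 1.0 or 4.5 are exactly representable in binary,
    # so doubling and checking the denominator is an exact test here.
    doubled = Fraction(value) * 2
    if doubled.denominator != 1:
        raise ValueError("expected an integer or half integer, got %r" % (value,))
    return doubled / 2

print(as_half_integer(0.5))   # 1/2
print(as_half_integer(3.0))   # 3
```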
[ { "body": "I was trying to find the result of this operation and am getting this error for half integers\r\n```\r\n>>> w3j = Wigner3j(0,0,1/2,1/2,1/2,-1/2) \r\n>>> w3j.doit()\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"D:\\Orgs\\sympy\\sympy\\physics\\quantum\\cg.py\", line 158, in doit \r\n return wigner_3j(self.j1, self.j2, self.j3, self.m1, self.m2, self.m3)\r\n File \"D:\\Orgs\\sympy\\sympy\\physics\\wigner.py\", line 199, in wigner_3j\r\n raise ValueError(\"j values must be integer or half integer\") \r\nValueError: j values must be integer or half integer\r\n>>> w3j = Wigner3j(0.5,0.5,0.5,0.5,0.5,0.5)\r\n>>> w3j.doit()\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"D:\\Orgs\\sympy\\sympy\\physics\\quantum\\cg.py\", line 158, in doit \r\n return wigner_3j(self.j1, self.j2, self.j3, self.m1, self.m2, self.m3)\r\n File \"D:\\Orgs\\sympy\\sympy\\physics\\wigner.py\", line 199, in wigner_3j raise ValueError(\"j values must be integer or half integer\") \r\nValueError: j values must be integer or half integer\r\n```\r\nIt is working fine for integer values but gives an error of values with half integers . ", "number": 26219, "title": "Wigner3j has problems with Half Integers " } ]
101e6d0215b5ada022577f1b77e512e54f24b59e
{ "head_commit": "f71a407033a38de213510c3f4a3a51c085690ae2", "head_commit_message": " Wigner3j : Fix inner function and add test case\n\nThis commit addresses an issue with the inner function in the\nwigner_3j calculation method in the sympy.physics.wigner module.\nThe inner function has been fixed to ensure accurate calculation\nof the Wigner 3j symbol.\n\nAdditionally, a new test case has been added to the\nsympy.physics.quantum.tests.test_cg module to validate the\ncorrectness of the Wigner 3j calculation. This test case covers\na scenario involving specific input values to thoroughly\ntest the functionality.\nExample : Right now you can provide imput as follow without getting valuerror : Wigner3j(1/2,1/2,1/2,1/2,1/2,1/2)\nReferences: #12345", "patch_to_review": "diff --git a/sympy/physics/quantum/tests/test_cg.py b/sympy/physics/quantum/tests/test_cg.py\nindex 7f04d386d713..e5ff16265f8e 100644\n--- a/sympy/physics/quantum/tests/test_cg.py\n+++ b/sympy/physics/quantum/tests/test_cg.py\n@@ -171,6 +171,8 @@ def test_cg_simp_sum():\n \n def test_doit():\n assert Wigner3j(S.Half, Rational(-1, 2), S.Half, S.Half, 0, 0).doit() == -sqrt(2)/2\n+ assert Wigner3j(1/2,1/2,1/2,1/2,1/2,1/2).doit() == 0\n+ assert Wigner3j(9/2,9/2,9/2,9/2,9/2,9/2).doit() == 0\n assert Wigner6j(1, 2, 3, 2, 1, 2).doit() == sqrt(21)/105\n assert Wigner6j(3, 1, 2, 2, 2, 1).doit() == sqrt(21) / 105\n assert Wigner9j(\ndiff --git a/sympy/physics/wigner.py b/sympy/physics/wigner.py\nindex e49360df15f7..3a6f4f5bb4e1 100644\n--- a/sympy/physics/wigner.py\n+++ b/sympy/physics/wigner.py\n@@ -107,7 +107,7 @@ def _calc_factlist(nn):\n _Factlist.append(_Factlist[ii - 1] * ii)\n return _Factlist[:int(nn) + 1]\n \n-\n+from fractions import Fraction\n def wigner_3j(j_1, j_2, j_3, m_1, m_2, m_3):\n r\"\"\"\n Calculate the Wigner 3j symbol `\\operatorname{Wigner3j}(j_1,j_2,j_3,m_1,m_2,m_3)`.\n@@ -194,12 +194,34 @@ def wigner_3j(j_1, j_2, j_3, m_1, m_2, m_3):\n \n - Jens Rasch (2009-03-24): initial version\n \"\"\"\n- if int(j_1 * 2) != j_1 * 2 or int(j_2 * 2) != j_2 * 2 or \\\n- int(j_3 * 2) != j_3 * 2:\n+\n+ def convert_float_to_rational_if_half_integer(value):\n+ if isinstance(value, float):\n+ fraction_value = Fraction(value).limit_denominator()\n+ if fraction_value.denominator == 2:\n+ return fraction_value\n+ return value\n+\n+ # Example list of values\n+ values = [j_1, j_2, j_3, m_1, m_2, m_3]\n+\n+ # Apply the conversion function to each value using a list comprehension\n+ converted_values = [convert_float_to_rational_if_half_integer(value) for value in values]\n+ j_1, j_2, j_3, m_1, m_2, m_3 = converted_values\n+\n+ # Define a tolerance for comparison\n+ tolerance = 1e-10\n+\n+ # Check if each value is either an integer or a half-integer\n+ if abs(j_1 * 2 - int(j_1 * 2)) > tolerance or \\\n+ abs(j_2 * 2 - int(j_2 * 2)) > tolerance or \\\n+ abs(j_3 * 2 - int(j_3 * 2)) > tolerance:\n raise ValueError(\"j values must be integer or half integer\")\n- if int(m_1 * 2) != m_1 * 2 or int(m_2 * 2) != m_2 * 2 or \\\n- int(m_3 * 2) != m_3 * 2:\n+ if abs(m_1 * 2 - int(m_1 * 2)) > tolerance or \\\n+ abs(m_2 * 2 - int(m_2 * 2)) > tolerance or \\\n+ abs(m_3 * 2 - int(m_3 * 2)) > tolerance:\n raise ValueError(\"m values must be integer or half integer\")\n+\n if m_1 + m_2 + m_3 != 0:\n return S.Zero\n prefid = Integer((-1) ** int(j_1 - j_2 - m_3))\n" }
[ { "diff_hunk": "@@ -107,7 +107,7 @@ def _calc_factlist(nn):\n _Factlist.append(_Factlist[ii - 1] * ii)\n return _Factlist[:int(nn) + 1]\n \n-\n+from fractions import Fraction", "line": null, "original_line": 110, "original_start_line": null, "path": "sympy/physics/wigner.py", "start_line": null, "text": "@user1:\nWhy are you importung here?\n\n@author:\nWIthout it was not working\r\n\n\n@user1:\nOk but why are you importing here and not at the top of the script? Is there a reason?\n\n@author:\nNo there's no reason , I can modify it if you want, and put it at the top of the code. I want to say that this is my first pull request so I am open to receive every type of advice. Do you know why the code didn't pass the benchmark test ?\n\n@user1:\nThe error says:\r\n```\r\nCouldn't load asv.plugins._mamba_helpers because\r\nNo module named 'libmambapy'\r\n```\r\nTherefore, I don't think is due to our changes. Try to re-run just this workflow manually if you can. Otherwise I cannot help you. I'm not an expert of libmambapay.\r\n\r\nAbout the import. I will put the import at the beginning of the script because is the place where people are expecting imports. You import in the middle of a script just in special cases.\r\n\n\n@author:\nThank you for your feedback, @user1. I appreciate your review. I'll implement your suggestion by moving the import statement to the beginning of the script for better clarity and conformity with best practices. Additionally, I'll rerun the test to verify if the changes resolve the issue. If there are any further concerns or recommendations, please don't hesitate to share them." } ]
c24377369348c8070167785e1ba5ac303eeae6d8
diff --git a/.mailmap b/.mailmap index 20b4d5efc52e..327176212189 100644 --- a/.mailmap +++ b/.mailmap @@ -1378,6 +1378,7 @@ Sushant Hiray <[email protected]> Susumu Ishizuka <[email protected]> Swapnil Agarwal <[email protected]> Szymon Mieszczak <[email protected]> +T-vaccari <[email protected]> Takafumi Arakaki <[email protected]> Takumasa Nakamura <[email protected]> Tanay Agrawal <[email protected]> diff --git a/sympy/physics/quantum/tests/test_cg.py b/sympy/physics/quantum/tests/test_cg.py index 7f04d386d713..e5ff16265f8e 100644 --- a/sympy/physics/quantum/tests/test_cg.py +++ b/sympy/physics/quantum/tests/test_cg.py @@ -171,6 +171,8 @@ def test_cg_simp_sum(): def test_doit(): assert Wigner3j(S.Half, Rational(-1, 2), S.Half, S.Half, 0, 0).doit() == -sqrt(2)/2 + assert Wigner3j(1/2,1/2,1/2,1/2,1/2,1/2).doit() == 0 + assert Wigner3j(9/2,9/2,9/2,9/2,9/2,9/2).doit() == 0 assert Wigner6j(1, 2, 3, 2, 1, 2).doit() == sqrt(21)/105 assert Wigner6j(3, 1, 2, 2, 2, 1).doit() == sqrt(21) / 105 assert Wigner9j( diff --git a/sympy/physics/wigner.py b/sympy/physics/wigner.py index e49360df15f7..346ef8707e5b 100644 --- a/sympy/physics/wigner.py +++ b/sympy/physics/wigner.py @@ -56,7 +56,7 @@ from sympy.core.add import Add from sympy.core.numbers import int_valued from sympy.core.function import Function -from sympy.core.numbers import (I, Integer, pi) +from sympy.core.numbers import (Float, I, Integer, pi, Rational, equal_valued) from sympy.core.singleton import S from sympy.core.symbol import Dummy from sympy.core.sympify import sympify @@ -108,6 +108,21 @@ def _calc_factlist(nn): return _Factlist[:int(nn) + 1] +def _Integer_or_halfInteger(value): + if isinstance(value, int): + return Integer(value) + elif isinstance(value, (float, Float)): + if isinstance(value, float) and value.is_integer(): + return Integer(int(value)) + elif (equal_valued((v:=2*value), (i:=int(v)))): + return Rational(i, 2) + elif isinstance(value, Integer): + return value + elif isinstance(value, Rational) and value.q == 2: + return value + raise ValueError("expecting integer or half-integer, got %s" % value) + + def wigner_3j(j_1, j_2, j_3, m_1, m_2, m_3): r""" Calculate the Wigner 3j symbol `\operatorname{Wigner3j}(j_1,j_2,j_3,m_1,m_2,m_3)`. 
@@ -194,16 +209,12 @@ def wigner_3j(j_1, j_2, j_3, m_1, m_2, m_3): - Jens Rasch (2009-03-24): initial version """ - if int(j_1 * 2) != j_1 * 2 or int(j_2 * 2) != j_2 * 2 or \ - int(j_3 * 2) != j_3 * 2: - raise ValueError("j values must be integer or half integer") - if int(m_1 * 2) != m_1 * 2 or int(m_2 * 2) != m_2 * 2 or \ - int(m_3 * 2) != m_3 * 2: - raise ValueError("m values must be integer or half integer") + + j_1, j_2, j_3, m_1, m_2, m_3 = map(_Integer_or_halfInteger, + [j_1, j_2, j_3, m_1, m_2, m_3]) + if m_1 + m_2 + m_3 != 0: return S.Zero - prefid = Integer((-1) ** int(j_1 - j_2 - m_3)) - m_3 = -m_3 a1 = j_1 + j_2 - j_3 if a1 < 0: return S.Zero @@ -216,6 +227,8 @@ def wigner_3j(j_1, j_2, j_3, m_1, m_2, m_3): if (abs(m_1) > j_1) or (abs(m_2) > j_2) or (abs(m_3) > j_3): return S.Zero + prefid = Integer((-1) ** int(j_1 - j_2 - m_3)) + m_3 = -m_3 maxfact = max(j_1 + j_2 + j_3 + 1, j_1 + abs(m_1), j_2 + abs(m_2), j_3 + abs(m_3)) _calc_factlist(int(maxfact)) @@ -337,6 +350,8 @@ def _big_delta_coeff(aa, bb, cc, prec=None): 1/2*sqrt(1/6) """ + # the triangle test will only pass if a) all 3 values are ints or + # b) 1 is an int and the other two are half-ints if not int_valued(aa + bb - cc): raise ValueError("j values must be integer or half integer and fulfill the triangle relation") if not int_valued(aa + cc - bb):
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26180@03df4d9
sympy/sympy
Python
26,180
Polys: Refine dup_clear_denoms function and update tests
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes: #26176 #### Brief description of what is fixed or changed Refined the `dup_clear_denoms` function within the polys module, focusing on enhancing its efficiency and reliability when dealing with exact domains. The adjustments ensure that the function more accurately clears denominators, especially in scenarios involving symbolic expressions and rational numbers. #### Other comments Updated tests reflect these changes by focusing on: - Ensuring the convert flag functions as intended across different scenarios. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * polys * Improved `dup_clear_denoms` function's efficiency in symbolic computations. <!-- END RELEASE NOTES -->
2024-02-05T14:02:18Z
CoercionFailed when integrating exponential functions. But only for some values - why? I am using sympy to solve and evaluate integrals. For some values, I get the error about coercion failed: _raise CoercionFailed("Cannot convert %s of type %s from %s to %s" % (element, type(element), base, self)) sympy.polys.polyerrors.CoercionFailed: Cannot convert (8.48717*_Dummy_6413 + 2.999)/(8.0089*_Dummy_6413 + 2.83) of type <class 'sympy.polys.fields.FracElement'> from RR(_Dummy_6413) to RR[_Dummy_6413]_ Even debugging and going through other issues here in github couldn't help me to find a solution. The integral is bounded, and only using slightly different values works. This is a minimal example: `import sympy as sym maximum_age = 1 a = sym.symbols('a') # Example 1: these values work [constant_1, constant_2] = [2.83, 2.99] # Example 2: these values work [constant_1, constant_2] = [1, 2.999] # Example 3: these values don't work [constant_1, constant_2] = [2.83, 2.999] # This is the mathematical operation to do factor_1 = 1 / (constant_1 * a + 1) factor_2 = constant_2 * sym.exp(-a) integral_term = sym.integrate(factor_1 * factor_2, (a, 0, maximum_age))`
The problem is: ```python In [11]: integrate(2.999*exp(-x)/(2.83*x + 1), (x, 0, 1)) --------------------------------------------------------------------------- CoercionFailed ``` More directly the problem is: ```python In [18]: Poly([F(8.48717/(8.0089*x + 2.83)), F(0.0)], y, domain=F).clear_denoms() Out[18]: (1.0⋅x + 0.353356890459364, Poly((8.48717*x + 2.999)/(8.0089*x + 2.83)*y, y, domain='RR(x)')) In [19]: Poly([F(8.48717/(8.0089*x + 2.83)), F(0.0)], y, domain=F).clear_denoms(convert=True) --------------------------------------------------------------------------- CoercionFailed ``` This is what happens if QQ is used instead of RR: ```python In [20]: F2 = QQ.frac_field(x) In [21]: Poly([F2(8.48717/(8.0089*x + 2.83)), F2(0.0)], y, domain=F2).clear_denoms() Out[21]: ⎛ 5570104133810989 ↪ ⎜x + ─────────────────, Poly(3900929712011235641763145417/3681104063018271642939543012*y, y, domain ↪ ⎝ 15763394698685100 ↪ ↪ ⎞ ↪ ='QQ(x)')⎟ ↪ ⎠ In [22]: Poly([F2(8.48717/(8.0089*x + 2.83)), F2(0.0)], y, domain=F2).clear_denoms(convert=True) Out[22]: ⎛ 5570104133810989 ↪ ⎜x + ─────────────────, Poly(3900929712011235641763145417/3681104063018271642939543012*y, y, domain ↪ ⎝ 15763394698685100 ↪ ↪ ⎞ ↪ ='QQ[x]')⎟ ↪ ⎠ ``` So the problem seems to be in `clear_denoms`. When the domain is `RR(x)` is does not extract the polynomial denominator properly. The problem seems to be this code: https://github.com/sympy/sympy/blob/0fef9fb5da57ab3ae58a7cbd4f63981f243e2747/sympy/polys/densetools.py#L1199-L1208 It assumes that multiplying by the lcm of the denominator will give a rational function without polynomia denominator but for an inexact domain like `RR` it is possible that the denominators do not cancel completely due to rounding error. Actually the error looks much bigger than a typical rounding error: ``` 1201 common = K1.lcm(common, K0.denom(c)) 1202 1203 if not K1.is_one(common): 1204 f = dup_mul_ground(f, common, K0) 1205 1206 -> if not convert: 1207 return common, f 1208 else: 1209 return common, dup_convert(f, K0, K1) 1210 1211 (Pdb) p f [(8.48717*x + 2.999)/(8.0089*x + 2.83), 0.0] (Pdb) p common x + 0.353356890459364 ``` It would be worth trying to understand how the error is so large but in any case `dup_clear_denoms` should be written differently to handle inexact domains. Rather than just multiplying by the LCM of the denominators and hoping that it cancels it should separate the numerator explicitly and multiply each numerator by the lcm divided by the denominator. Actually that is possibly a more efficient approach for exact fields including `QQ` and `QQ(x)` etc as well. This seems to be the fix: ```diff diff --git a/sympy/polys/densetools.py b/sympy/polys/densetools.py index b56a4a773b..ecd1c98c3b 100644 --- a/sympy/polys/densetools.py +++ b/sympy/polys/densetools.py @@ -1199,13 +1199,20 @@ def dup_clear_denoms(f, K0, K1=None, convert=False): for c in f: common = K1.lcm(common, K0.denom(c)) - if not K1.is_one(common): - f = dup_mul_ground(f, common, K0) + if K1.is_one(common): + if not convert: + return common, f + else: + return common, dup_convert(f, K0, K1) + + # Use quo rather than exquo to handle inexact domains by discarding the + # remainder. 
+ f = [K0.numer(c)*K1.quo(common, K0.denom(c)) for c in f] if not convert: - return common, f + return common, dup_convert(f, K1, K0) else: - return common, dup_convert(f, K0, K1) + return common, f def _rec_clear_denoms(g, v, K0, K1): ``` > It would be worth trying to understand how the error is so large Actually the error is not as big as I thought: ```python In [5]: monic((8.48717*x + 2.999)) Out[5]: 1.0⋅x + 0.353356890459364 In [6]: monic(8.0089*x + 2.83) Out[6]: 1.0⋅x + 0.353356890459364 In [7]: cancel((8.48717*x + 2.999)/(8.0089*x + 2.83)) Out[7]: 8.48717⋅x + 2.999 ───────────────── 8.0089⋅x + 2.83 ``` Interestingly with the fix above the integral evaluates but only for floats and not for rationals: ```python In [9]: integrate(3*exp(-x)/(2*x + 1), (x, 0, 1)) Out[9]: 1 ⌠ ⎮ 1 3⋅⎮ ─────────── dx ⎮ x x ⎮ 2⋅x⋅ℯ + ℯ ⌡ 0 In [10]: integrate(3.0*exp(-x)/(2.0*x + 1), (x, 0, 1)) Out[10]: ⎛ ⅈ⋅π⎞ ⎛ ⅈ⋅π⎞ - 2.47308190605019⋅Ei⎝0.5⋅ℯ ⎠ + 2.47308190605019⋅Ei⎝1.5⋅ℯ ⎠ ``` In any case the issue can be closed with the diff above and a test for `clear_denoms`. @oscarbenjamin Can you please tell me the reason for interchanging the return statments in the `dup_clear_denoms` ? There are two different domains `K0` and `K1`. The changed way of computing `f` gives a result over `K1 = RR[x]` rather than `K0 = RR(x)`. See: https://docs.sympy.org/latest/modules/polys/domainsintro.html
[ { "body": "I am using sympy to solve and evaluate integrals. For some values, I get the error about coercion failed:\r\n\r\n_raise CoercionFailed(\"Cannot convert %s of type %s from %s to %s\" % (element, type(element), base, self))\r\nsympy.polys.polyerrors.CoercionFailed: Cannot convert (8.48717*_Dummy_6413 + 2.999)/(8.0089*_Dummy_6413 + 2.83) of type <class 'sympy.polys.fields.FracElement'> from RR(_Dummy_6413) to RR[_Dummy_6413]_\r\n\r\nEven debugging and going through other issues here in github couldn't help me to find a solution. The integral is bounded, and only using slightly different values works.\r\nThis is a minimal example:\r\n\r\n\r\n`import sympy as sym\r\nmaximum_age = 1\r\na = sym.symbols('a')\r\n\r\n# Example 1: these values work\r\n[constant_1, constant_2] = [2.83, 2.99]\r\n\r\n# Example 2: these values work\r\n[constant_1, constant_2] = [1, 2.999]\r\n\r\n# Example 3: these values don't work\r\n[constant_1, constant_2] = [2.83, 2.999]\r\n\r\n# This is the mathematical operation to do\r\nfactor_1 = 1 / (constant_1 * a + 1)\r\nfactor_2 = constant_2 * sym.exp(-a)\r\nintegral_term = sym.integrate(factor_1 * factor_2, (a, 0, maximum_age))`", "number": 26176, "title": "CoercionFailed when integrating exponential functions. But only for some values - why?" } ]
a2122a82cd64d5b4cbd93ed4bba0641f6afb4770
{ "head_commit": "03df4d9ec94b384e9c79871fcf650bde211ddea9", "head_commit_message": "This commit refines the `dup_clear_denoms` function within the polys module, focusing on enhancing its efficiency and reliability when dealing with exact domains. The adjustments ensure that the function more accurately clears denominators, especially in scenarios involving symbolic expressions and rational numbers.\n\nChanges made include:\n- Fine-tuning the logic for handling denominators to ensure robust performance across a variety of use cases.\n- Streamlining the conversion process to guarantee that expressions are correctly converted to the target domain when requested.\n\nThe primary aim of these modifications was to address feedback and improve the function's handling of symbolic and rational expressions, thereby increasing the utility and reliability of polynomial operations within Sympy.\n\nUpdated tests reflect these changes by focusing on:\n- Verifying the accurate clearing of denominators in exact domains.\n- Ensuring the `convert` flag functions as intended across different scenarios.\n\nWhile initial considerations included enhancements for inexact domain handling, this version concentrates on exact domains to solidify the foundation of `dup_clear_denoms`. Future work may revisit inexact domain improvements.\n\nFixes issue: #26176 (https://github.com/sympy/sympy/issues/26176)\n\nimproved code quality\n\nUpdated PR with comments\n\nUpdated code quality\n\nUpdated test cases for domain handling\n\nUpdated code quality\n\nAdded test with simplified expression\n\nUpdated test\n\nImproved code quality\n\nImproved code quality\n\nimproved code quality\n\nUpdated PR with comments\n\nUpdated code quality\n\nUpdated test cases for domain handling\n\nUpdated code quality\n\nAdded test with simplified expression\n\nUpdated test\n\nImproved code quality\n\nImproved code quality\n\nSquashed commits\n\nUpdated PR with comments\n\nUpdated code quality\n\nUpdated test cases for domain handling\n\nUpdated code quality\n\nAdded test with simplified expression\n\nUpdated test\n\nImproved code quality\n\nImproved code quality", "patch_to_review": "diff --git a/sympy/polys/densetools.py b/sympy/polys/densetools.py\nindex b56a4a773b6b..ecd1c98c3be3 100644\n--- a/sympy/polys/densetools.py\n+++ b/sympy/polys/densetools.py\n@@ -1199,13 +1199,20 @@ def dup_clear_denoms(f, K0, K1=None, convert=False):\n for c in f:\n common = K1.lcm(common, K0.denom(c))\n \n- if not K1.is_one(common):\n- f = dup_mul_ground(f, common, K0)\n+ if K1.is_one(common):\n+ if not convert:\n+ return common, f\n+ else:\n+ return common, dup_convert(f, K0, K1)\n+\n+ # Use quo rather than exquo to handle inexact domains by discarding the\n+ # remainder.\n+ f = [K0.numer(c)*K1.quo(common, K0.denom(c)) for c in f]\n \n if not convert:\n- return common, f\n+ return common, dup_convert(f, K1, K0)\n else:\n- return common, dup_convert(f, K0, K1)\n+ return common, f\n \n \n def _rec_clear_denoms(g, v, K0, K1):\ndiff --git a/sympy/polys/tests/test_densetools.py b/sympy/polys/tests/test_densetools.py\nindex e8efb52f5230..4bd2f8029e1c 100644\n--- a/sympy/polys/tests/test_densetools.py\n+++ b/sympy/polys/tests/test_densetools.py\n@@ -1,5 +1,6 @@\n \"\"\"Tests for dense recursive polynomials' tools. 
\"\"\"\n \n+from sympy.core.symbol import symbols\n from sympy.polys.densebasic import (\n dup_normal, dmp_normal,\n dup_from_raw_dict,\n@@ -28,7 +29,6 @@\n dup_sign_variations,\n dup_revert, dmp_revert,\n )\n-\n from sympy.polys.polyclasses import ANP\n \n from sympy.polys.polyerrors import (\n@@ -40,7 +40,7 @@\n \n from sympy.polys.specialpolys import f_polys\n \n-from sympy.polys.domains import FF, ZZ, QQ, EX\n+from sympy.polys.domains import FF, ZZ, QQ, EX, RR\n from sympy.polys.rings import ring\n \n from sympy.core.numbers import I\n@@ -48,7 +48,6 @@\n from sympy.functions.elementary.trigonometric import sin\n \n from sympy.abc import x\n-\n from sympy.testing.pytest import raises\n \n f_0, f_1, f_2, f_3, f_4, f_5, f_6 = [ f.to_dense() for f in f_polys() ]\n@@ -613,6 +612,9 @@ def test_dup_sign_variations():\n \n \n def test_dup_clear_denoms():\n+\n+ x = symbols('x')\n+\n assert dup_clear_denoms([], QQ, ZZ) == (ZZ(1), [])\n \n assert dup_clear_denoms([QQ(1)], QQ, ZZ) == (ZZ(1), [QQ(1)])\n@@ -637,6 +639,9 @@ def test_dup_clear_denoms():\n assert dup_clear_denoms([EX(7)], EX) == (EX(1), [EX(7)])\n assert dup_clear_denoms([EX(sin(x)/x), EX(0)], EX) == (EX(x), [EX(sin(x)), EX(0)])\n \n+ F = RR.frac_field(x)\n+ result = dup_clear_denoms([F(8.48717/(8.0089*x + 2.83)), F(0.0)], F)\n+ assert str(result) == \"(x + 0.353356890459364, [1.05971731448763, 0.0])\"\n \n def test_dmp_clear_denoms():\n assert dmp_clear_denoms([[]], 1, QQ, ZZ) == (ZZ(1), [[]])\n" }
[ { "diff_hunk": "@@ -613,6 +612,9 @@ def test_dup_sign_variations():\n \n \n def test_dup_clear_denoms():\n+\n+ x = symbols('x')\n+", "line": null, "original_line": 617, "original_start_line": 615, "path": "sympy/polys/tests/test_densetools.py", "start_line": null, "text": "@user1:\n```suggestion\r\n```" } ]
56018d272ea7076fa8e29968d5019be518667fa4
diff --git a/sympy/polys/densetools.py b/sympy/polys/densetools.py index b56a4a773b6b..ecd1c98c3be3 100644 --- a/sympy/polys/densetools.py +++ b/sympy/polys/densetools.py @@ -1199,13 +1199,20 @@ def dup_clear_denoms(f, K0, K1=None, convert=False): for c in f: common = K1.lcm(common, K0.denom(c)) - if not K1.is_one(common): - f = dup_mul_ground(f, common, K0) + if K1.is_one(common): + if not convert: + return common, f + else: + return common, dup_convert(f, K0, K1) + + # Use quo rather than exquo to handle inexact domains by discarding the + # remainder. + f = [K0.numer(c)*K1.quo(common, K0.denom(c)) for c in f] if not convert: - return common, f + return common, dup_convert(f, K1, K0) else: - return common, dup_convert(f, K0, K1) + return common, f def _rec_clear_denoms(g, v, K0, K1): diff --git a/sympy/polys/tests/test_densetools.py b/sympy/polys/tests/test_densetools.py index e8efb52f5230..d9c9cf6e56b1 100644 --- a/sympy/polys/tests/test_densetools.py +++ b/sympy/polys/tests/test_densetools.py @@ -28,7 +28,6 @@ dup_sign_variations, dup_revert, dmp_revert, ) - from sympy.polys.polyclasses import ANP from sympy.polys.polyerrors import ( @@ -40,7 +39,7 @@ from sympy.polys.specialpolys import f_polys -from sympy.polys.domains import FF, ZZ, QQ, EX +from sympy.polys.domains import FF, ZZ, QQ, EX, RR from sympy.polys.rings import ring from sympy.core.numbers import I @@ -48,7 +47,6 @@ from sympy.functions.elementary.trigonometric import sin from sympy.abc import x - from sympy.testing.pytest import raises f_0, f_1, f_2, f_3, f_4, f_5, f_6 = [ f.to_dense() for f in f_polys() ] @@ -637,6 +635,9 @@ def test_dup_clear_denoms(): assert dup_clear_denoms([EX(7)], EX) == (EX(1), [EX(7)]) assert dup_clear_denoms([EX(sin(x)/x), EX(0)], EX) == (EX(x), [EX(sin(x)), EX(0)]) + F = RR.frac_field(x) + result = dup_clear_denoms([F(8.48717/(8.0089*x + 2.83)), F(0.0)], F) + assert str(result) == "(x + 0.353356890459364, [1.05971731448763, 0.0])" def test_dmp_clear_denoms(): assert dmp_clear_denoms([[]], 1, QQ, ZZ) == (ZZ(1), [[]])
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26173@edc3e5b
sympy/sympy
Python
26,173
[integrals] fix substitution formula in meijerint
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #25786 #### Brief description of what is fixed or changed fix substitution formula in meijerint #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * integrals * Fixed a bug with integrating over negative real numbers using meijerint algorithm. Formerly, `integrate(exp(-x**2), (x,-5,0), meijerg=True)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * integrals * Fixed a bug with integrating over negative real numbers using meijerint algorithm. Formerly, `integrate(exp(-x**2), (x,-5,0), meijerg=True)` incorrectly gave `-sqrt(pi)/2 * erf(5)` instead of `sqrt(pi)/2 * erf(5)`. <!-- END RELEASE NOTES -->
2024-02-04T06:31:35Z
Wrong result for a simple integral `sympy.integrate` returns a wrong result for a simple integral of a Gaussian function. The following code ```python import sympy as sp x = sp.Symbol('x') f = sp.exp(-0.8*x**2) sp.integrate(f, (x, -3, -0.2)) ``` returns $−0.447288590994492\sqrt{\pi}$, which has the correct magnitude, but the wrong sign. This was observed with version 1.12 of `sympy`. Interestingly, if the constant in the exponential is defined as a `Rational`: ```python f = sp.exp(-sp.Rational(8,10)*x**2) ``` then the result is correct. If we use a different constant, for instance ```python f = sp.exp(-0.4*x**2) ``` the result is also correct.
In fact if we calculate the indefinite integral first and then substitute limits, sympy gives a correct result : ``` indefinite_integral = sp.integrate(f, x) result = indefinite_integral.subs(x, -0.2) - indefinite_integral.subs(x, -3) ``` One of the reason for this disparity I believe is because sympy evaluates definite integrals symbolically, while indefinite integrals are more of a step by step approach, _**Although Ideally both should have same answer**_ The correct output is seen if both risch and meijerg are disabled: ```python In [21]: integrate(f, (x, -3, -0.2), meijerg=False, risch=False) Out[21]: 0.447288590994492⋅√π ``` It may be related to erf.eval @ sympy/sympy/functions/special/error_functions.py ``` # Try to pull out factors of -1 if arg.could_extract_minus_sign(): return -cls(-arg) ``` Consider the following snippet: ``` from sympy import Symbol, exp from sympy.integrals.meijerint import meijerint_indefinite x = Symbol('x', negative=True) f = exp(-x**2) print(meijerint_indefinite(f, x)) ``` This will print "-sqrt(pi)*erf(x)/2" instead of sqrt(pi)*erf(x)/2. Is this intended? Consider the following substitution t=a*x**b: ``` integral G(params, a*x**b) dx = 1/(a*b) integral x**(1-b) G(params, a*x**b) * ab x**(b-1) dx = 1/(a*b) integral (t/a)**(1/b-1) G(params, t) * dt = 1/(a**(1/b) b) integral G(params + 1/b-1, t) * dt = 1/(a**(1/b) b) G(1, params + 1/b, 0, t) = 1/(a**(1/b) b) G(1, params + 1/b, 0, a*x**b) ``` I think the code in [meijerint.py#L1729](https://github.com/sympy/sympy/blob/ffa18fad4a63d4a3a999b5bdb933bda2af1dce2f/sympy/integrals/meijerint.py#L1729) is based on this idea. But this is not quite right, e.g. ``` integral G(((),()),((0),()),x**2) dx = integral exp(-x**2) dx = sqrt(pi)*erf(x)/2 ``` but `1/2 G(((1),()),((1/2),(0)),x^2)=sqrt(pi)*erf(x)/2 * sqrt(x**2)/x` > Consider the following snippet: > > ``` > from sympy import Symbol, exp > from sympy.integrals.meijerint import meijerint_indefinite > > x = Symbol('x', negative=True) > f = exp(-x**2) > > print(meijerint_indefinite(f, x)) > ``` > > This will print "-sqrt(pi)*erf(x)/2" instead of sqrt(pi)*erf(x)/2. Is this intended? > > Consider the following substitution t=a*x**b: > > ``` > integral G(params, a*x**b) dx > = 1/(a*b) integral x**(1-b) G(params, a*x**b) * ab x**(b-1) dx > = 1/(a*b) integral (t/a)**(1/b-1) G(params, t) * dt > = 1/(a**(1/b) b) integral G(params + 1/b-1, t) * dt > = 1/(a**(1/b) b) G(1, params + 1/b, 0, t) > = 1/(a**(1/b) b) G(1, params + 1/b, 0, a*x**b) > ``` > > I think the code in [meijerint.py#L1729](https://github.com/sympy/sympy/blob/ffa18fad4a63d4a3a999b5bdb933bda2af1dce2f/sympy/integrals/meijerint.py#L1729) is based on this idea. But this is not quite right, e.g. > > ``` > integral G(((),()),((0),()),x**2) dx > = integral exp(-x**2) dx > = sqrt(pi)*erf(x)/2 > ``` > > but `1/2 G(((1),()),((1/2),(0)),x^2)=sqrt(pi)*erf(x)/2 * sqrt(x**2)/x` Regarding the problematic substitution I asked on [math.stackexchange](https://math.stackexchange.com/questions/4831298/antiderivative-of-meijer-g-function) and [mathoverflow](https://mathoverflow.net/questions/461606/antiderivative-of-meijer-g-function) > I think the code in [meijerint.py#L1729](https://github.com/sympy/sympy/blob/ffa18fad4a63d4a3a999b5bdb933bda2af1dce2f/sympy/integrals/meijerint.py#L1729) is based on this idea. But this is not quite right, e.g. 
> > ``` > integral G(((),()),((0),()),x**2) dx > = integral exp(-x**2) dx > = sqrt(pi)*erf(x)/2 > ``` > > but `1/2 G(((1),()),((1/2),(0)),x^2)=sqrt(pi)*erf(x)/2 * sqrt(x**2)/x` Note the all to `powdenest` and the comment here: https://github.com/sympy/sympy/blob/ffa18fad4a63d4a3a999b5bdb933bda2af1dce2f/sympy/integrals/meijerint.py#L1738-L1740 ```python In [5]: e = sqrt(x**2) In [6]: e Out[6]: ____ ╱ 2 ╲╱ x In [7]: powdenest(e) Out[7]: ____ ╱ 2 ╲╱ x In [8]: powdenest(e, polar=True) Out[8]: x In [9]: x = symbols('x', negative=True) In [10]: e2 = sqrt(x**2) In [11]: e2 Out[11]: -x In [13]: powdenest(e2, polar=True) Out[13]: -x ``` Is that the problem here? The premise of the meijerg code seems to be that `sqrt(x**2)` would remain unevaluated so that `powdenest` can simplify it later. When there is a need to take care of branches I think that meijerg uses `exp_polar` which is what the `polar=True` references. I don't fully understand how meijerg uses `exp_polar` and how to do it correctly but the impression I have is that the code depends on not allowing the sort of assumptions based "simplification" that we see when the symbol has `negative=True`. It is possibly important to prevent the integration variable from having any assumptions set that would cause this.
[ { "body": "`sympy.integrate` returns a wrong result for a simple integral of a Gaussian function. The following code\r\n```python\r\nimport sympy as sp\r\nx = sp.Symbol('x')\r\nf = sp.exp(-0.8*x**2)\r\nsp.integrate(f, (x, -3, -0.2))\r\n```\r\nreturns $−0.447288590994492\\sqrt{\\pi}$, which has the correct magnitude, but the wrong sign. This was observed with version 1.12 of `sympy`.\r\n\r\nInterestingly, if the constant in the exponential is defined as a `Rational`:\r\n```python\r\nf = sp.exp(-sp.Rational(8,10)*x**2)\r\n```\r\nthen the result is correct. If we use a different constant, for instance\r\n```python\r\nf = sp.exp(-0.4*x**2)\r\n```\r\nthe result is also correct.", "number": 25786, "title": "Wrong result for a simple integral" } ]
d447356bacf8c6d0db1c4f118c7c1188c4ef33a5
{ "head_commit": "edc3e5bef72ec8afa801a1f4e89f01f4ab1cdf93", "head_commit_message": "25786 fix test", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 0a73b94bf2b5..7ceb7715d2e9 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -945,6 +945,7 @@ Matthew Wardrop <[email protected]>\n Matthias Bussonnier <[email protected]>\n Matthias Geier <[email protected]>\n Matthias Köppe <[email protected]> Matthias Koeppe <[email protected]>\n+Matthias Liesenfeld <[email protected]>\n Matthias Rettl <[email protected]>\n Matthias Toews <[email protected]>\n Mauro Garavello <[email protected]>\ndiff --git a/sympy/integrals/meijerint.py b/sympy/integrals/meijerint.py\nindex aa9dac6db166..5b518fdf2a16 100644\n--- a/sympy/integrals/meijerint.py\n+++ b/sympy/integrals/meijerint.py\n@@ -1706,8 +1706,8 @@ def _meijerint_indefinite_1(f, x):\n c += s\n \n # we do a substitution t=a*x**b, get integrand fac*t**rho*g\n- fac_ = fac * C / (b*a**((1 + c)/b))\n- rho = (c + 1)/b - 1\n+ fac_ = fac * C * x**(1 + c) / b\n+ rho = (c + 1)/b\n \n # we now use t**rho*G(params, t) = G(params + rho, t)\n # [L, page 150, equation (4)]\n@@ -1720,13 +1720,13 @@ def _meijerint_indefinite_1(f, x):\n t = _dummy('t', 'meijerint-indefinite', S.One)\n \n def tr(p):\n- return [a + rho + 1 for a in p]\n+ return [a + rho for a in p]\n if any(b.is_integer and (b <= 0) == True for b in tr(g.bm)):\n r = -meijerg(\n- tr(g.an), tr(g.aother) + [1], tr(g.bm) + [0], tr(g.bother), t)\n+ list(g.an), list(g.aother) + [1-rho], list(g.bm) + [-rho], list(g.bother), t)\n else:\n r = meijerg(\n- tr(g.an) + [1], tr(g.aother), tr(g.bm), tr(g.bother) + [0], t)\n+ list(g.an) + [1-rho], list(g.aother), list(g.bm), list(g.bother) + [-rho], t)\n # The antiderivative is most often expected to be defined\n # in the neighborhood of x = 0.\n if b.is_extended_nonnegative and not f.subs(x, 0).has(S.NaN, S.ComplexInfinity):\ndiff --git a/sympy/integrals/tests/test_meijerint.py b/sympy/integrals/tests/test_meijerint.py\nindex f23975e65c03..8cb7121f1959 100644\n--- a/sympy/integrals/tests/test_meijerint.py\n+++ b/sympy/integrals/tests/test_meijerint.py\n@@ -755,10 +755,15 @@ def test_issue_6462():\n \n def test_indefinite_1_bug():\n assert integrate((b + t)**(-a), t, meijerg=True\n- ) == -b**(1 - a)*(1 + t/b)**(1 - a)/(a - 1)\n+ ).equals(-b**(1 - a)*(1 + t/b)**(1 - a)/(a - 1))\n \n \n def test_pr_23583():\n # This result is wrong. Check whether new result is correct when this test fail.\n assert integrate(1/sqrt((x - I)**2-1), meijerg=True) == \\\n Piecewise((acosh(x - I), Abs((x - I)**2) > 1), (-I*asin(x - I), True))\n+\n+\n+# 25786\n+def test_integrate_function_of_square_over_negatives():\n+ assert integrate(exp(-x**2), (x,-5,0), meijerg=True) == sqrt(pi)/2 * erf(5)\ndiff --git a/sympy/physics/continuum_mechanics/tests/test_beam.py b/sympy/physics/continuum_mechanics/tests/test_beam.py\nindex 2c33fca5f9ae..3ee6b044b448 100644\n--- a/sympy/physics/continuum_mechanics/tests/test_beam.py\n+++ b/sympy/physics/continuum_mechanics/tests/test_beam.py\n@@ -336,7 +336,7 @@ def test_variable_moment():\n assert b.slope().expand() == ((10*x*SingularityFunction(x, 0, 0)\n - 10*(x - 4)*SingularityFunction(x, 4, 0))/E).expand()\n assert b.deflection().expand() == ((5*x**2*SingularityFunction(x, 0, 0)\n- - 10*Piecewise((0, Abs(x)/4 < 1), (16*meijerg(((3, 1), ()), ((), (2, 0)), x/4), True))\n+ - 10*Piecewise((0, Abs(x)/4 < 1), (x**2*meijerg(((-1, 1), ()), ((), (-2, 0)), x/4), True))\n + 40*SingularityFunction(x, 4, 1))/E).expand()\n \n b = Beam(4, E - x, I)\n" }
[ { "diff_hunk": "@@ -755,10 +755,15 @@ def test_issue_6462():\n \n def test_indefinite_1_bug():\n assert integrate((b + t)**(-a), t, meijerg=True\n- ) == -b**(1 - a)*(1 + t/b)**(1 - a)/(a - 1)\n+ ).equals(-b**(1 - a)*(1 + t/b)**(1 - a)/(a - 1))", "line": null, "original_line": 758, "original_start_line": 756, "path": "sympy/integrals/tests/test_meijerint.py", "start_line": null, "text": "@user1:\nIt would be better to change this test answer to show what is now returned.\r\n\r\nThe old result was:\r\n```python\r\nIn [4]: a, b, c = symbols(\"a b c\")\r\n\r\nIn [5]: integrate((b + t)**(-a), t, meijerg=True)\r\nOut[5]: \r\n 1 - a \r\n 1 - a ⎛ t⎞ \r\n-b ⋅⎜1 + ─⎟ \r\n ⎝ b⎠ \r\n─────────────────────\r\n a - 1 \r\n```\r\nThe new result is\r\n```python\r\nIn [2]: integrate((b + t)**(-a), t, meijerg=True)\r\nOut[2]: \r\n 1 - a \r\n ⎛ t⎞ \r\n-b⋅⎜1 + ─⎟ \r\n ⎝ b⎠ \r\n────────────────\r\n a a \r\n a⋅b - b \r\n```" } ]
7f246d740ad6d780f03fca4b415ddcb632c0133b
diff --git a/.mailmap b/.mailmap index 0a73b94bf2b5..7ceb7715d2e9 100644 --- a/.mailmap +++ b/.mailmap @@ -945,6 +945,7 @@ Matthew Wardrop <[email protected]> Matthias Bussonnier <[email protected]> Matthias Geier <[email protected]> Matthias Köppe <[email protected]> Matthias Koeppe <[email protected]> +Matthias Liesenfeld <[email protected]> Matthias Rettl <[email protected]> Matthias Toews <[email protected]> Mauro Garavello <[email protected]> diff --git a/sympy/integrals/meijerint.py b/sympy/integrals/meijerint.py index aa9dac6db166..5b518fdf2a16 100644 --- a/sympy/integrals/meijerint.py +++ b/sympy/integrals/meijerint.py @@ -1706,8 +1706,8 @@ def _meijerint_indefinite_1(f, x): c += s # we do a substitution t=a*x**b, get integrand fac*t**rho*g - fac_ = fac * C / (b*a**((1 + c)/b)) - rho = (c + 1)/b - 1 + fac_ = fac * C * x**(1 + c) / b + rho = (c + 1)/b # we now use t**rho*G(params, t) = G(params + rho, t) # [L, page 150, equation (4)] @@ -1720,13 +1720,13 @@ def _meijerint_indefinite_1(f, x): t = _dummy('t', 'meijerint-indefinite', S.One) def tr(p): - return [a + rho + 1 for a in p] + return [a + rho for a in p] if any(b.is_integer and (b <= 0) == True for b in tr(g.bm)): r = -meijerg( - tr(g.an), tr(g.aother) + [1], tr(g.bm) + [0], tr(g.bother), t) + list(g.an), list(g.aother) + [1-rho], list(g.bm) + [-rho], list(g.bother), t) else: r = meijerg( - tr(g.an) + [1], tr(g.aother), tr(g.bm), tr(g.bother) + [0], t) + list(g.an) + [1-rho], list(g.aother), list(g.bm), list(g.bother) + [-rho], t) # The antiderivative is most often expected to be defined # in the neighborhood of x = 0. if b.is_extended_nonnegative and not f.subs(x, 0).has(S.NaN, S.ComplexInfinity): diff --git a/sympy/integrals/tests/test_meijerint.py b/sympy/integrals/tests/test_meijerint.py index f23975e65c03..79629b60af7e 100644 --- a/sympy/integrals/tests/test_meijerint.py +++ b/sympy/integrals/tests/test_meijerint.py @@ -4,7 +4,7 @@ from sympy.core.sorting import default_sort_key from sympy.functions.elementary.complexes import Abs, arg, re, unpolarify from sympy.functions.elementary.exponential import (exp, exp_polar, log) -from sympy.functions.elementary.hyperbolic import cosh, acosh +from sympy.functions.elementary.hyperbolic import cosh, acosh, sinh from sympy.functions.elementary.miscellaneous import sqrt from sympy.functions.elementary.piecewise import Piecewise, piecewise_fold from sympy.functions.elementary.trigonometric import (cos, sin, sinc, asin) @@ -754,11 +754,21 @@ def test_issue_6462(): def test_indefinite_1_bug(): - assert integrate((b + t)**(-a), t, meijerg=True - ) == -b**(1 - a)*(1 + t/b)**(1 - a)/(a - 1) + assert integrate((b + t)**(-a), t, meijerg=True) == -b*(1 + t/b)**(1 - a)/(a*b**a - b**a) def test_pr_23583(): # This result is wrong. Check whether new result is correct when this test fail. 
assert integrate(1/sqrt((x - I)**2-1), meijerg=True) == \ Piecewise((acosh(x - I), Abs((x - I)**2) > 1), (-I*asin(x - I), True)) + + +# 25786 +def test_integrate_function_of_square_over_negatives(): + assert integrate(exp(-x**2), (x,-5,0), meijerg=True) == sqrt(pi)/2 * erf(5) + + +def test_issue_25949(): + from sympy.core.symbol import symbols + y = symbols("y", nonzero=True) + assert integrate(cosh(y*(x + 1)), (x, -1, -0.25), meijerg=True) == sinh(0.75*y)/y diff --git a/sympy/physics/continuum_mechanics/tests/test_beam.py b/sympy/physics/continuum_mechanics/tests/test_beam.py index 2c33fca5f9ae..3ee6b044b448 100644 --- a/sympy/physics/continuum_mechanics/tests/test_beam.py +++ b/sympy/physics/continuum_mechanics/tests/test_beam.py @@ -336,7 +336,7 @@ def test_variable_moment(): assert b.slope().expand() == ((10*x*SingularityFunction(x, 0, 0) - 10*(x - 4)*SingularityFunction(x, 4, 0))/E).expand() assert b.deflection().expand() == ((5*x**2*SingularityFunction(x, 0, 0) - - 10*Piecewise((0, Abs(x)/4 < 1), (16*meijerg(((3, 1), ()), ((), (2, 0)), x/4), True)) + - 10*Piecewise((0, Abs(x)/4 < 1), (x**2*meijerg(((-1, 1), ()), ((), (-2, 0)), x/4), True)) + 40*SingularityFunction(x, 4, 1))/E).expand() b = Beam(4, E - x, I)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-26052@fb4f780
sympy/sympy
Python
26,052
Add test case for bad subset evaluation
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs Fixes https://github.com/sympy/sympy/issues/9855 #### Brief description of what is fixed or changed Add test case for issue https://github.com/sympy/sympy/issues/9855 #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-01-08T16:51:51Z
bad subset evaluation > > > x, y, z = symbols('x, y, z', real=True) > > > s1 = Interval(1,x) & Interval(y,2) > > > s2 = Interval(1,2) > > > s1.is_subset(s2) > > > False > > > simplify(s1.contains(z) >> s2.contains(z)) > > > True
Looks like this might be fixed: ```python >>> s1.is_subset(s2) is None True ``` The `is_subset` result was changed from False to None in 31c2dfa2ed466bbd644ade6ad14bbfb7ecbb33a0 from #17526. On the other hand the correct answer is True so None is suboptimal here. I'm not sure if there is a simple way to improve `is_subset` or if there should be some other way of asking more complicated `is_subset` queries like this. Hello everyone , I wish to fix this issue could you please assign this issue to me ? Issues are not assigned. Feel free to work on this though. I think that @smichr's intention was that this could be fixed if a test is added for this case somewhere. Hi, may I be assigned to this issue? I would like to do my best to address it.
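The query from the issue can be replayed directly (an illustrative sketch; the `None` return reflects the post-#17526 behaviour described above, while the implication shows the subset relation does in fact hold):

```python
from sympy import Interval, simplify, symbols

x, y, z = symbols('x y z', real=True)
s1 = Interval(1, x) & Interval(y, 2)
s2 = Interval(1, 2)

print(s1.is_subset(s2))                             # None: inconclusive, no longer False
print(simplify(s1.contains(z) >> s2.contains(z)))   # True, per the original report
```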
[ { "body": "> > > x, y, z = symbols('x, y, z', real=True)\n> > > s1 = Interval(1,x) & Interval(y,2)\n> > > s2 = Interval(1,2)\n> > > s1.is_subset(s2)\n> > > False\n> > > simplify(s1.contains(z) >> s2.contains(z))\n> > > True\n", "number": 9855, "title": "bad subset evaluation" } ]
02eeb233a4078fb5f97a76bafbb36433691b372b
{ "head_commit": "fb4f7800194be27e633ad6c21b775c0833eeaf1d", "head_commit_message": "Add test case for isssue 9855", "patch_to_review": "diff --git a/sympy/sets/tests/test_sets.py b/sympy/sets/tests/test_sets.py\nindex 47426479e98f..4bceadbf6f01 100644\n--- a/sympy/sets/tests/test_sets.py\n+++ b/sympy/sets/tests/test_sets.py\n@@ -1744,3 +1744,10 @@ def test_issue_14336():\n x = Symbol(\"x\")\n U -= U.intersect(Ne(x, 1).as_set())\n U -= U.intersect(S.true.as_set())\n+\n+def test_issue_9855():\n+ #https://github.com/sympy/sympy/issues/9855\n+ x, y, z = symbols('x, y, z', real=True)\n+ s1 = Interval(1,x) & Interval(y,2)\n+ s2 = Interval(1,2)\n+ assert s1.is_subset(s2) == None\n" }
[ { "diff_hunk": "@@ -1744,3 +1744,10 @@ def test_issue_14336():\n x = Symbol(\"x\")\n U -= U.intersect(Ne(x, 1).as_set())\n U -= U.intersect(S.true.as_set())\n+\n+def test_issue_9855():\n+ #https://github.com/sympy/sympy/issues/9855\n+ x, y, z = symbols('x, y, z', real=True)\n+ s1 = Interval(1,x) & Interval(y,2)\n+ s2 = Interval(1,2)", "line": null, "original_line": 1752, "original_start_line": 1751, "path": "sympy/sets/tests/test_sets.py", "start_line": null, "text": "@user1:\n```suggestion\r\n s1 = Interval(1, x) & Interval(y, 2)\r\n s2 = Interval(1, 2)\r\n```\n\n@author:\nSorry I did not see the comment yesterday. Thank you." } ]
0079834e1d5e9db7bbecd7f55436983e36d87bb6
diff --git a/sympy/sets/tests/test_sets.py b/sympy/sets/tests/test_sets.py index 47426479e98f..657ab19a90eb 100644 --- a/sympy/sets/tests/test_sets.py +++ b/sympy/sets/tests/test_sets.py @@ -1744,3 +1744,10 @@ def test_issue_14336(): x = Symbol("x") U -= U.intersect(Ne(x, 1).as_set()) U -= U.intersect(S.true.as_set()) + +def test_issue_9855(): + #https://github.com/sympy/sympy/issues/9855 + x, y, z = symbols('x, y, z', real=True) + s1 = Interval(1, x) & Interval(y, 2) + s2 = Interval(1, 2) + assert s1.is_subset(s2) == None
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-26073@0f19a73
sympy/sympy
Python
26,073
Updated Symbol Docstring
#### References to other Issues or PRs Fixes #22981 #### Brief description of what is fixed or changed Added Explanation, assumptions, Examples, Parameters to the documentation. Added examples about creating symbols by passing Greek symbols and rules for adding subscripts. #### Other comments I am new to Open source. Please tell me if changes are required. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2024-01-14T00:51:19Z
Docstring for Symbol is very thin I linked to the docstring of `Symbol`, which is one of the most widely used objects in SymPy and was surprised at how little information was present in the docstring. This is all it is: ``` class Symbol(AtomicExpr, Boolean): """ Assumptions: commutative = True You can override the default assumptions in the constructor. Examples ======== >>> from sympy import symbols >>> A,B = symbols('A,B', commutative = False) >>> bool(A*B != B*A) True >>> bool(A*B*2 == 2*A*B) == True # multiplication by scalars is commutative True """ ``` I would expect this to explain quite a bit more: all assumptions, passing in greek letters, or subscripts, and probably more.
@moorepants I want to work on this issue. @moorepants I think even the examples used here should not really be used here instead in the documentation we do have symbols and therefore it should be used there. As a beginner I don't know for sure and it was not mentioned in the documentation but I just think, `sympy.Symbol` is used to define just a single symbol and for multiple symbols we use `sympy.symbols` Therefore, while changing the docstring I mentioned this commutative property in `symbols` instead of `Symbol` and removed Assumptions from `Symbol` do you think it's okay? > do you think it's okay? No, I don't. I think the docstrings for any public module/function/class/method/variable should be complete and helpful. The docstring for `Symbol` is very thin and does not explain everything it can do. @moorepants I tried updating the docstrings can you please check once? I don't really know for sure what I should be adding more like what more assumptions are there or are the changes made okay or is there something you expect more to be added I will commit more changes then. Hi is this issue still open? If yes I would like to contribute to it. When the issues are closed, the large green Open near the title will be a large purple Closed. And you don't need to ask to contribute. If there is no PR showing as linked to the issue, it should be safe to open one. Another thing the docstring should probably contain is a note that Symbols compare equal based on name and assumptions, so `Symbol('x') != Symbol('x', real=True)` even though they both print the same. All symbols that look the same can be shown in unambiguous form with `disambiguate`: ![image](https://user-images.githubusercontent.com/90703/198849796-53062295-fc31-43aa-ba00-bc0007cc26ca.png) Anyone wanting to work on this please look at the docstring style guide: https://docs.sympy.org/latest/contributing/docstring.html I would like to work on this issue, please assign it to me
[ { "body": "I linked to the docstring of `Symbol`, which is one of the most widely used objects in SymPy and was surprised at how little information was present in the docstring. This is all it is:\r\n\r\n```\r\nclass Symbol(AtomicExpr, Boolean):\r\n \"\"\"\r\n Assumptions:\r\n commutative = True\r\n You can override the default assumptions in the constructor.\r\n Examples\r\n ========\r\n >>> from sympy import symbols\r\n >>> A,B = symbols('A,B', commutative = False)\r\n >>> bool(A*B != B*A)\r\n True\r\n >>> bool(A*B*2 == 2*A*B) == True # multiplication by scalars is commutative\r\n True\r\n \"\"\"\r\n```\r\n\r\nI would expect this to explain quite a bit more: all assumptions, passing in greek letters, or subscripts, and probably more.", "number": 22981, "title": "Docstring for Symbol is very thin" } ]
3f7066f0d4f7efde2126715c9f5e1768be832371
{ "head_commit": "0f19a730afeef9710ec970f144dd146ffa50c9a0", "head_commit_message": "Update sympy/core/symbol.py\n\nCo-authored-by: S.Y. Lee <[email protected]>", "patch_to_review": "diff --git a/sympy/core/symbol.py b/sympy/core/symbol.py\nindex 68c89874c97f..4e5c754fff32 100644\n--- a/sympy/core/symbol.py\n+++ b/sympy/core/symbol.py\n@@ -205,21 +205,56 @@ def numbered_string_incr(s, start=0):\n \n class Symbol(AtomicExpr, Boolean):\n \"\"\"\n+ Symbol class is used to create symbolic variables.\n+\n+ Explanation\n+ ===========\n+\n+ Symbolic variables are placeholders for mathematical symbols that can represent numbers, constants, or any other mathematical entities. A symbolic variable, created using the 'Symbol' class, can be used in mathematical expressions and perform symbolic computations.\n+\n Assumptions:\n- commutative = True\n+\n+ commutative = True\n+ positive = True\n+ real = True\n+ imaginary = True\n+ complex = True\n+ complete list of more assumptions- :ref:`predicates`\n \n You can override the default assumptions in the constructor.\n \n Examples\n ========\n \n- >>> from sympy import symbols\n- >>> A,B = symbols('A,B', commutative = False)\n- >>> bool(A*B != B*A)\n- True\n- >>> bool(A*B*2 == 2*A*B) == True # multiplication by scalars is commutative\n+ >>> from sympy import Symbol\n+ >>> from sympy.abc import x\n+ >>> x = Symbol(\"x\", positive=True)\n+ >>> x.is_positive\n True\n+ >>> x.is_negative\n+ False\n+\n+ passing in greek letters:\n+\n+ >>> from sympy import Symbol\n+ >>> alpha = Symbol('alpha')\n+ >>> alpha\n+ α \n+\n+ Trailing digits are automatically treated like subscripts of what precedes them in the name.\n+ General format to add subscript to a symbol :\n+ ``<var_name> = Symbol('<symbol_name>_<subscript>')``\n+\n+ >>> from sympy import Symbol\n+ >>> alpha_i = Symbol('alpha_i')\n+ >>> alpha_i\n+ αᵢ\n+\n+ Parameters\n+ ==========\n \n+ AtomicExpr: variable name\n+ Boolean: Assumption with a boolean value(True or False)\n \"\"\"\n \n is_comparable = False\n" }
[ { "diff_hunk": "@@ -205,21 +205,56 @@ def numbered_string_incr(s, start=0):\n \n class Symbol(AtomicExpr, Boolean):\n \"\"\"\n+ Symbol class is used to create symbolic variables.\n+\n+ Explanation\n+ ===========\n+\n+ Symbolic variables are placeholders for mathematical symbols that can represent numbers, constants, or any other mathematical entities. A symbolic variable, created using the 'Symbol' class, can be used in mathematical expressions and perform symbolic computations.", "line": null, "original_line": 213, "original_start_line": null, "path": "sympy/core/symbol.py", "start_line": null, "text": "@user1:\nUnfortunately, I think that we need better prompt to explain symbols, \r\nbecause the definition just seems like cyclic \r\n\r\n> **Symbolic** variables are placeholders for mathematical **symbols**\n\n@author:\nOk👍" } ]
0962d0549d53bbf0a33de597533e0b47e91328c1
diff --git a/.mailmap b/.mailmap index 9260c4d775dd..54a946dd2571 100644 --- a/.mailmap +++ b/.mailmap @@ -1488,6 +1488,7 @@ Yeshwanth N <[email protected]> <[email protected]> YiDing Jiang <[email protected]> Yicong Guo <[email protected]> Yogesh Mishra <[email protected]> yogesh1997 <[email protected]> +Your Name <[email protected]> Yu Kobayashi <[email protected]> Yukai Chou <[email protected]> muzimuzhi <[email protected]> Yuki Matsuda <[email protected]> @@ -1585,6 +1586,7 @@ rimibis <[email protected]> risubaba <[email protected]> ritikBhandari <[email protected]> rushyam <[email protected]> +sachinSingh16-09 <[email protected]> sbt4104 <[email protected]> scimax <[email protected]> seadavis <[email protected]> diff --git a/sympy/core/symbol.py b/sympy/core/symbol.py index 68c89874c97f..2b51740dfadb 100644 --- a/sympy/core/symbol.py +++ b/sympy/core/symbol.py @@ -205,21 +205,55 @@ def numbered_string_incr(s, start=0): class Symbol(AtomicExpr, Boolean): """ + Symbol class is used to create symbolic variables. + + Explanation + =========== + + Symbolic variables are placeholders for mathematical symbols that can represent numbers, constants, or any other mathematical entities and can be used in mathematical expressions and to perform symbolic computations. + Assumptions: - commutative = True + + commutative = True + positive = True + real = True + imaginary = True + complex = True + complete list of more assumptions- :ref:`predicates` You can override the default assumptions in the constructor. Examples ======== - >>> from sympy import symbols - >>> A,B = symbols('A,B', commutative = False) - >>> bool(A*B != B*A) - True - >>> bool(A*B*2 == 2*A*B) == True # multiplication by scalars is commutative + >>> from sympy import Symbol + >>> x = Symbol("x", positive=True) + >>> x.is_positive True + >>> x.is_negative + False + + passing in greek letters: + >>> from sympy import Symbol + >>> alpha = Symbol('alpha') + >>> alpha #doctest: +SKIP + α + + Trailing digits are automatically treated like subscripts of what precedes them in the name. + General format to add subscript to a symbol : + ``<var_name> = Symbol('<symbol_name>_<subscript>')`` + + >>> from sympy import Symbol + >>> alpha_i = Symbol('alpha_i') + >>> alpha_i #doctest: +SKIP + αᵢ + + Parameters + ========== + + AtomicExpr: variable name + Boolean: Assumption with a boolean value(True or False) """ is_comparable = False @@ -244,10 +278,10 @@ def _diff_wrt(self): Examples ======== - >>> from sympy import Symbol - >>> x = Symbol('x') - >>> x._diff_wrt - True + >>> from sympy import Symbol + >>> x = Symbol('x') + >>> x._diff_wrt + True """ return True diff --git a/sympy/testing/quality_unicode.py b/sympy/testing/quality_unicode.py index fef292e47dc3..d43623ff5112 100644 --- a/sympy/testing/quality_unicode.py +++ b/sympy/testing/quality_unicode.py @@ -49,6 +49,9 @@ # lll method has unicode in docstring references and author name r'*/sympy/polys/matrices/domainmatrix.py', r'*/sympy/matrices/repmatrix.py', + + # Explanation of symbols uses greek letters + r'*/sympy/core/symbol.py', ] unicode_strict_whitelist = [
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Documentation Updates" }
sympy__sympy-26051@12f4278
sympy/sympy
Python
26,051
removing is_commutative
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes: #23721 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * tensor * Fixed the issue that `commutative=False` assumption is not working with `IndexedBase`. For example, if `t` and `u` are `IndexedBase` with `commutative=False`, `u[0]*t[0]` should not simplify to `u[0]*t[0]`. <!-- END RELEASE NOTES -->
2024-01-07T20:21:45Z
IndexedBase non-commutivity fails Hi there, I believe I've found a bug relating to the IndexedBase class, where it doesn't seem to respect non-commutivity of objects. If I have two non-commuting IndexedBase objects `t` and `u`, I would expect that `t[0]*u[0]+u[0]*t[0]!=2*t[0]*u[0]`. The below code highlights the issue, showing that all the objects have `commutivity=False`, yet the addition of `t[0]*u[0]` with `u[0]*t[0]` simplified to `2t[0]u[0]` import sympy as sp t=sp.IndexedBase('t',commutative=False) u=sp.IndexedBase('u',commutative=False) print('t commutativity is {}'.format(t._assumptions['commutative'])) print('t[0] commutativity is {}'.format(t[0]._assumptions['commutative'])) print('u commutativity is {}'.format(u._assumptions['commutative'])) print('u[0] commutativity is {}'.format(u[0]._assumptions['commutative'])) print(t[0]*u[0]+u[0]*t[0]) The output I get in sympy version 1.10.1 is t commutativity is False t[0] commutativity is False u commutativity is False u[0] commutativity is False 2*t[0]*u[0]
Problem is already here: ```python >>> print(u[0]*t[0]) t[0]*u[0] ``` Seems to be an assumptions problem: ```python In [4]: u[0]._assumptions['commutative'] Out[4]: False In [5]: u[0].is_commutative Out[5]: True ``` Looks like the fix is ```diff diff --git a/sympy/tensor/indexed.py b/sympy/tensor/indexed.py index 8119a2a..9c9b6f3 100644 --- a/sympy/tensor/indexed.py +++ b/sympy/tensor/indexed.py @@ -139,7 +139,6 @@ class Indexed(Expr): True """ - is_commutative = True is_Indexed = True is_symbol = True is_Atom = True ``` I'm not sure if there's any particular reason why `Indexed` sets commutative to True. Probably the assumptions system should detect a case like this where `is_commutative` contradicts `_assumptions['commutative']` and cause an error somewhere. Instead of making a change to the source of the class one may also set this property dynamically: `Indexed.is_commutative = False`. I've already shown how to fix this so marking as easy to fix.
[ { "body": "Hi there, \r\n\r\nI believe I've found a bug relating to the IndexedBase class, where it doesn't seem to respect non-commutivity of objects. If I have two non-commuting IndexedBase objects `t` and `u`, I would expect that `t[0]*u[0]+u[0]*t[0]!=2*t[0]*u[0]`. The below code highlights the issue, showing that all the objects have `commutivity=False`, yet the addition of `t[0]*u[0]` with `u[0]*t[0]` simplified to `2t[0]u[0]`\r\n\r\n import sympy as sp \r\n \r\n t=sp.IndexedBase('t',commutative=False) \r\n u=sp.IndexedBase('u',commutative=False) \r\n \r\n print('t commutativity is {}'.format(t._assumptions['commutative'])) \r\n print('t[0] commutativity is {}'.format(t[0]._assumptions['commutative'])) \r\n print('u commutativity is {}'.format(u._assumptions['commutative'])) \r\n print('u[0] commutativity is {}'.format(u[0]._assumptions['commutative'])) \r\n \r\n print(t[0]*u[0]+u[0]*t[0]) \r\n \r\nThe output I get in sympy version 1.10.1 is \r\n\r\n t commutativity is False\r\n t[0] commutativity is False\r\n u commutativity is False\r\n u[0] commutativity is False\r\n 2*t[0]*u[0]\r\n", "number": 23721, "title": "IndexedBase non-commutivity fails" } ]
02eeb233a4078fb5f97a76bafbb36433691b372b
{ "head_commit": "12f427839f71789241df5a2ad80678727783a2fc", "head_commit_message": "removing is_commutative", "patch_to_review": "diff --git a/sympy/tensor/indexed.py b/sympy/tensor/indexed.py\nindex ff372ac655d9..8be0bd902b1c 100644\n--- a/sympy/tensor/indexed.py\n+++ b/sympy/tensor/indexed.py\n@@ -139,7 +139,6 @@ class Indexed(Expr):\n True\n \n \"\"\"\n- is_commutative = True\n is_Indexed = True\n is_symbol = True\n is_Atom = True\ndiff --git a/sympy/tensor/tests/test_indexed.py b/sympy/tensor/tests/test_indexed.py\nindex 0dc537638762..c17d5d26e781 100644\n--- a/sympy/tensor/tests/test_indexed.py\n+++ b/sympy/tensor/tests/test_indexed.py\n@@ -501,3 +501,10 @@ def test_complicated_derivative_with_Indexed():\n ((x[i] - y[i])**2/sigma,)\n )/sigma**2\n )\n+\n+\n+def test_IndexedBase_commutative():\n+ t = IndexedBase('t' ,commutative=False)\n+ u = IndexedBase('u' ,commutative=False)\n+\n+ assert t[0]*u[0] != u[0]*t[0]\n" }
[ { "diff_hunk": "@@ -501,3 +501,10 @@ def test_complicated_derivative_with_Indexed():\n ((x[i] - y[i])**2/sigma,)\n )/sigma**2\n )\n+\n+\n+def test_IndexedBase_commutative():\n+ t = IndexedBase('t' ,commutative=False)\n+ u = IndexedBase('u' ,commutative=False)", "line": null, "original_line": 508, "original_start_line": 507, "path": "sympy/tensor/tests/test_indexed.py", "start_line": null, "text": "@user1:\n```suggestion\r\n t = IndexedBase('t', commutative=False)\r\n u = IndexedBase('u', commutative=False)\r\n```" } ]
27b9ccee4a4f49a1e25cb433fc03b0454e4142a9
diff --git a/.mailmap b/.mailmap index baec495a41c1..e87829efc992 100644 --- a/.mailmap +++ b/.mailmap @@ -1501,6 +1501,7 @@ Zach Raines <[email protected]> Zachariah Etienne <[email protected]> Zamrath Nizam <[email protected]> <[email protected]> Zaz Brown <[email protected]> +Zedmat <[email protected]> Zeel Shah <[email protected]> Zhi-Qiang Zhou <[email protected]> zhouzq-thu <[email protected]> Zhongshi <[email protected]> diff --git a/sympy/tensor/indexed.py b/sympy/tensor/indexed.py index ff372ac655d9..8be0bd902b1c 100644 --- a/sympy/tensor/indexed.py +++ b/sympy/tensor/indexed.py @@ -139,7 +139,6 @@ class Indexed(Expr): True """ - is_commutative = True is_Indexed = True is_symbol = True is_Atom = True diff --git a/sympy/tensor/tests/test_indexed.py b/sympy/tensor/tests/test_indexed.py index 0dc537638762..c0b269682743 100644 --- a/sympy/tensor/tests/test_indexed.py +++ b/sympy/tensor/tests/test_indexed.py @@ -501,3 +501,9 @@ def test_complicated_derivative_with_Indexed(): ((x[i] - y[i])**2/sigma,) )/sigma**2 ) + + +def test_IndexedBase_commutative(): + t = IndexedBase('t', commutative=False) + u = IndexedBase('u', commutative=False) + assert t[0]*u[0] != u[0]*t[0]
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-26031@e3f932b
sympy/sympy
Python
26,031
minor changes in mod(a, b)
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #26016 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * core * minor changes in mod as mentioned in [commnet](https://github.com/sympy/sympy/issues/26016#issuecomment-1873374064) <!-- END RELEASE NOTES -->
2024-01-01T17:14:01Z
i think mod(a,b) are considering only integers solutions Hello guys, i have a simplification of my code which is: ``` import sympy as sp import itertools Zp = range(3) Zq = range(3) sextetos = itertools.product(Zp, Zp, Zp, Zq, Zq, Zq) x, y, z, w = sp.symbols('x y z w', real=True) system = [] #it will test the generic form: B(a,b) = x*a1*b1+y*a1*b2+z*a2*b1+w*a2*b2, for all pairs a= (a1,a2), b=(b1,b2) e c=(c1,c2) #both mapping axioms, and return the set of conditions that x, y, z e w must satisfy. for a1, b1, c1, a2, b2, c2 in sextetos: d1 = (b1 + c1) % p d2 = (b2 + c2) % q if (a1, b1, c1, a2, b2, c2) != (0, b1, c1, 0, b2, c2): print(a1,b1,c1,a2,b2,c2) eq1 = ((x * (a1 * b1+b1 * a1)+y * (a1 * b2+b1 * a2)+z * (a2*b1+b2 * a1)+w * (a2 * b2+a2 * b2))) % 2 system.append(eq1) eq2 = (x * (a1 * ((b1+c1)%p))+y * (a1 * ((b2+c2)%q))+z * (a2 * ((b1+c1)%p))+w * (a2*((b2+c2)%q)) - (x * a1 * (b1+c1)+y * a1 * (b2+c2)+z * a2*(b1+c1)+w * a2 * (b2+c2))) % 2 eq2 = (x * a1 * (sp.Mod(b1+c1,p)-(b1+c1))+y * a1 * (sp.Mod(b2+c2,q)-(b2+c2))+z * a2 * (sp.Mod(b1+c1,p)-(b1+c1))+w * a2 * (sp.Mod(b2+c2,q)-(b2+c2))) % 2 print(eq1) print(eq2) input() vars = [x, y, z, w] print(sp.nonlinsolve(system, vars)) ``` I was feeling something was a litte bit wrong with this code, because the conditionset it returns has some rules that i know that doesnt exist, like for the sexteto (a1,b1,c1,a2,b2,c2) = (0,0,0,1,1,2), eq2 should be w * 1 * (-3) % 2. Or, with sp tools: Eq(3*Mod(w,2),0), considering the possibility of w = 2/3 (example). However, the code returns me just Eq(Mod(w,2),0), and i just cant understand why... Important that: the error is before the nonlinsolve function. If u just put a print(eq2) with the equations.append(eq2), it will return (Eq(Mod(w,2),0). If someone have an ideia of what is happening, please give me a hand. Merry Christmas, gifhubbers :D
You haven't defined what p and q are. oops, sorry, it was an input, and Zp, Zq was two ranges based on p and q. I decided to send with p and q = 3 to make it more specific, but i forgot to switch all the p and q with the number 3. Sorry. @joaorrmattos I think the following code is a simplified version which removes unnecessary calculations and aligns with the length of respective changes, please have a look at it! ``` import sympy as sp import itertools p = 3 # Define the value of p q = 3 # Define the value of q Zp = range(p) Zq = range(q) sextetos = itertools.product(Zp, Zp, Zp, Zq, Zq, Zq) x, y, z, w = sp.symbols('x y z w', real=True) system = [] for a1, b1, c1, a2, b2, c2 in sextetos: if (a1, b1, c1, a2, b2, c2) != (0, 0, 0, 1, 1, 2): eq1 = ((x * (a1 * b1 + b1 * a1) + y * (a1 * b2 + b1 * a2) + z * (a2 * b1 + b2 * a1) + w * (a2 * b2 + a2 * b2))) % 2 system.append(eq1) mod_val_p = sp.Mod(b1 + c1, p) mod_val_q = sp.Mod(b2 + c2, q) eq2 = (x * a1 * mod_val_p + y * a1 * mod_val_q + z * a2 * mod_val_p + w * a2 * mod_val_q - (x * a1 * (b1 + c1) + y * a1 * (b2 + c2) + z * a2 * (b1 + c1) + w * a2 * (b2 + c2))) % 2 system.append(eq2) vars = [x, y, z, w] solutions = sp.nonlinsolve(system, vars) print(solutions) ``` hmmm unfortunately it doesnt work. 1: i put (a1, b1, c1, a2, b2, c2) != (0,b1,c1,0,b2,c2) because if a1 and a2 equals 0, both eq1 and eq2 goes to 0. 2: its important to analyse (a1, b1, c1, a2, b2, c2) = (0, 0, 0, 1, 1, 2), because it shows us the problem very quick. Ur idea was really good, but it keeps turning the equation Eq(Mod(3*w, 2,), 0) to Eq(Mod(w, 2), 0), considering the only the possibility of w to be integer. :/ If I understand correctly then the question is why this happens: ``` In [1]: Mod(3*x, 2) Out[1]: x mod 2 In [2]: (3*x) % 2 Out[2]: x mod 2 ``` This simplification assumes that `x` is an integer which is not necessarily the case if no assumptions are set on `x`: ``` In [3]: (3*(S(2)/3)) % 2 Out[3]: 0 ``` yes, thats correct. I have an alternative that is: keep my equations without mod app, then only %2 when i already switched the variables with the values i want to verify. The bad thing is: it takes more time than it would with Mod(a,b)... Thats why i asked for a solution... If there is nothing to do with it, ie, the simplification will assume that x is intenger everytime when it is a variable, i accept it ahahaha. Thanks for the explanation! Happy new year! This is my suggested fix: ```diff diff --git a/sympy/core/mod.py b/sympy/core/mod.py index 873e815cb2..3f0e07a561 100644 --- a/sympy/core/mod.py +++ b/sympy/core/mod.py @@ -164,8 +164,10 @@ def number_eval(p, q): return prod_non_mod*cls(net, q) if q.is_Integer and q is not S.One: - non_mod_l = [i % q if i.is_Integer and (i % q is not S.Zero) else i for - i in non_mod_l] + if all(t.is_integer for t in p.args): + non_mod_l = [i % q if i.is_Integer else i for i in p.args] + if any(iq is S.Zero for iq in non_mod_l): + return S.Zero p = Mul(*(non_mod_l + mod_l)) ``` Then ```python In [1]: x = symbols('x') In [2]: Mod(3*x, 2) Out[2]: 3⋅x mod 2 In [3]: x = symbols('x', integer=True) In [4]: Mod(3*x, 2) Out[4]: x mod 2 ```
[ { "body": "Hello guys, i have a simplification of my code which is:\r\n\r\n```\r\nimport sympy as sp\r\nimport itertools\r\n\r\nZp = range(3)\r\nZq = range(3) \r\nsextetos = itertools.product(Zp, Zp, Zp, Zq, Zq, Zq)\r\n\r\n\r\nx, y, z, w = sp.symbols('x y z w', real=True)\r\nsystem = []\r\n#it will test the generic form: B(a,b) = x*a1*b1+y*a1*b2+z*a2*b1+w*a2*b2, for all pairs a= (a1,a2), b=(b1,b2) e c=(c1,c2)\r\n#both mapping axioms, and return the set of conditions that x, y, z e w must satisfy.\r\n\r\nfor a1, b1, c1, a2, b2, c2 in sextetos:\r\n d1 = (b1 + c1) % p\r\n d2 = (b2 + c2) % q \r\n if (a1, b1, c1, a2, b2, c2) != (0, b1, c1, 0, b2, c2):\r\n print(a1,b1,c1,a2,b2,c2)\r\n eq1 = ((x * (a1 * b1+b1 * a1)+y * (a1 * b2+b1 * a2)+z * (a2*b1+b2 * a1)+w * (a2 * b2+a2 * b2))) % 2\r\n system.append(eq1)\r\n eq2 = (x * (a1 * ((b1+c1)%p))+y * (a1 * ((b2+c2)%q))+z * (a2 * ((b1+c1)%p))+w * (a2*((b2+c2)%q)) - (x * a1 * (b1+c1)+y * a1 * (b2+c2)+z * a2*(b1+c1)+w * a2 * (b2+c2))) % 2\r\n eq2 = (x * a1 * (sp.Mod(b1+c1,p)-(b1+c1))+y * a1 * (sp.Mod(b2+c2,q)-(b2+c2))+z * a2 * (sp.Mod(b1+c1,p)-(b1+c1))+w * a2 * (sp.Mod(b2+c2,q)-(b2+c2))) % 2\r\n print(eq1)\r\n print(eq2)\r\n input()\r\nvars = [x, y, z, w]\r\nprint(sp.nonlinsolve(system, vars)) \r\n```\r\n\r\nI was feeling something was a litte bit wrong with this code, because the conditionset it returns has some rules that i know that doesnt exist, like for the sexteto (a1,b1,c1,a2,b2,c2) = (0,0,0,1,1,2), eq2 should be w * 1 * (-3) % 2. Or, with sp tools: Eq(3*Mod(w,2),0), considering the possibility of w = 2/3 (example). However, the code returns me just Eq(Mod(w,2),0), and i just cant understand why...\r\n\r\nImportant that: the error is before the nonlinsolve function. If u just put a print(eq2) with the equations.append(eq2), it will return (Eq(Mod(w,2),0). \r\n\r\nIf someone have an ideia of what is happening, please give me a hand. \r\n\r\nMerry Christmas, gifhubbers :D\r\n\r\n", "number": 26016, "title": "i think mod(a,b) are considering only integers solutions" } ]
c81c8169142f14a1b427664e073cde196f111d53
{ "head_commit": "e3f932bd290c7dc7959de4438b9848e9954a6f62", "head_commit_message": "Adding test cases", "patch_to_review": "diff --git a/sympy/core/mod.py b/sympy/core/mod.py\nindex 873e815cb227..3f0e07a5612c 100644\n--- a/sympy/core/mod.py\n+++ b/sympy/core/mod.py\n@@ -164,8 +164,10 @@ def number_eval(p, q):\n return prod_non_mod*cls(net, q)\n \n if q.is_Integer and q is not S.One:\n- non_mod_l = [i % q if i.is_Integer and (i % q is not S.Zero) else i for\n- i in non_mod_l]\n+ if all(t.is_integer for t in p.args):\n+ non_mod_l = [i % q if i.is_Integer else i for i in p.args]\n+ if any(iq is S.Zero for iq in non_mod_l):\n+ return S.Zero\n \n p = Mul(*(non_mod_l + mod_l))\n \ndiff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py\nindex 6e5ab8fdd025..cd5a2575c8e6 100644\n--- a/sympy/core/tests/test_args.py\n+++ b/sympy/core/tests/test_args.py\n@@ -682,6 +682,7 @@ def test_sympy__core__function__WildFunction():\n def test_sympy__core__mod__Mod():\n from sympy.core.mod import Mod\n assert _test_args(Mod(x, 2))\n+ assert _test_args(Mod(3*x, 2))\n \n \n def test_sympy__core__mul__Mul():\n" }
[ { "diff_hunk": "@@ -682,6 +682,7 @@ def test_sympy__core__function__WildFunction():\n def test_sympy__core__mod__Mod():\n from sympy.core.mod import Mod\n assert _test_args(Mod(x, 2))\n+ assert _test_args(Mod(3*x, 2))", "line": null, "original_line": 685, "original_start_line": null, "path": "sympy/core/tests/test_args.py", "start_line": null, "text": "@user1:\nThis test would pass without the changes here. This does not really test anything useful and should be removed.\r\n\r\nTests should be added in `sympy/core/tests/test_arit.py`." } ]
d276455c8a70f84f5e7398fe17677272c0162d0d
diff --git a/sympy/core/mod.py b/sympy/core/mod.py index 873e815cb227..3f0e07a5612c 100644 --- a/sympy/core/mod.py +++ b/sympy/core/mod.py @@ -164,8 +164,10 @@ def number_eval(p, q): return prod_non_mod*cls(net, q) if q.is_Integer and q is not S.One: - non_mod_l = [i % q if i.is_Integer and (i % q is not S.Zero) else i for - i in non_mod_l] + if all(t.is_integer for t in p.args): + non_mod_l = [i % q if i.is_Integer else i for i in p.args] + if any(iq is S.Zero for iq in non_mod_l): + return S.Zero p = Mul(*(non_mod_l + mod_l)) diff --git a/sympy/core/tests/test_arit.py b/sympy/core/tests/test_arit.py index 90086b42894c..d9848c39eb8c 100644 --- a/sympy/core/tests/test_arit.py +++ b/sympy/core/tests/test_arit.py @@ -1996,6 +1996,11 @@ def test_Mod(): from sympy.abc import phi assert Mod(4.0*Mod(phi, 1) , 2) == 2.0*(Mod(2*(Mod(phi, 1)), 1)) + xi = symbols('x', integer=True) + assert unchanged(Mod, xi, 2) + assert Mod(3*xi, 2) == Mod(xi, 2) + assert unchanged(Mod, 3*x, 2) + def test_Mod_Pow(): # modular exponentiation
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25982@bfd5e77
sympy/sympy
Python
25,982
polys: add RR[x].is_Exact == False
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #25182. #### Brief description of what is fixed or changed This fix has been suggested in #25182. Unfortunately it introduces other test failures. I'll leave this as a draft for future reference in case anybody else wants to get this merged. (I'm not familiar with the polys code base.) #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * polys * The domain is_Exact and get_exact methods now correctly handle composite domains like polynomial rings and `CC.get_exact()` now gives `QQ_I`. <!-- END RELEASE NOTES -->
2023-12-13T11:18:43Z
Question about sympy.solve / sympy.nonlinsolve I am trying to solve the nonlinear system of equations below. (These are the three 'law of cosine' equations, I found them in the internet.) ``` a1, b1, c1, ca, cb, cg = sm.symbols('a1, b1, c1, ca, cb, cg') eq1 = a1*a1 + b1*b1 - 2.*a1*b1*cg - c1*c1 eq2 = a1*a1 + c1*c1 - 2.*a1*c1*cb - b1*b1 eq3 = b1*b1 + c1*c1 - 2.*b1*c1*ca - a1*a1 loesung = sm.nonlinsolve([eq1, eq2, eq3], [c1, cb, cg]) ``` I first tried sympy.solve. It gave a solution, actually two solutions, but in the documentation I could not find, that sympy.solve will solve systems of nonlinear equations. Also, the solutions seemed to be wrong, e.g. the cos(angle) > 1 Then I tried sympy.nonlinsolve. After about 45 min of running, I aborted it. My question: Is it normal, that it takes so long to find a solution, or is this an indication, that none exists? Many thanks for any explanations!
If you change the floats to integers then it solves very quickly: ```python In [7]: a1, b1, c1, ca, cb, cg = sm.symbols('a1, b1, c1, ca, cb, cg') ...: eq1 = a1*a1 + b1*b1 - 2*a1*b1*cg - c1*c1 ...: eq2 = a1*a1 + c1*c1 - 2*a1*c1*cb - b1*b1 ...: eq3 = b1*b1 + c1*c1 - 2*b1*c1*ca - a1*a1 ...: ...: %time loesung = sm.nonlinsolve([eq1, eq2, eq3], [c1, cb, cg]) CPU times: user 341 ms, sys: 11 µs, total: 341 ms Wall time: 341 ms In [8]: loesung Out[8]: ⎧⎛ _____________________ ____ ⎪⎜ _____________________ ╱ 2 2 2 2 ╱ 2 ⎨⎜ ╱ 2 2 2 2 -╲╱ a₁ + b₁ ⋅ca - b₁ b₁⋅(ca - 1)⋅(ca + 1) ca⋅╲╱ a₁ ⎪⎜b₁⋅ca - ╲╱ a₁ + b₁ ⋅ca - b₁ , ──────────────────────────, - ──────────────────── + ────────── ⎩⎝ a₁ a₁ _________________⎞ ⎛ _____________________ 2 2 2 ⎟ ⎜ _____________________ ╱ 2 2 2 2 + b₁ ⋅ca - b₁ ⎟ ⎜ ╱ 2 2 2 2 ╲╱ a₁ + b₁ ⋅ca - b₁ b₁⋅(ca - 1)⋅(ca ─────────────────⎟, ⎜b₁⋅ca + ╲╱ a₁ + b₁ ⋅ca - b₁ , ────────────────────────, - ──────────────── a₁ ⎠ ⎝ a₁ a₁ _____________________⎞⎫ ╱ 2 2 2 2 ⎟⎪ + 1) ca⋅╲╱ a₁ + b₁ ⋅ca - b₁ ⎟⎬ ──── - ───────────────────────────⎟⎪ a₁ ⎠⎭ ``` This line is supposed to convert to rational internally: https://github.com/sympy/sympy/blob/e2cadc140cc969fea038240a39961a66a2f3dd6d/sympy/solvers/solveset.py#L3572-L3577 It fails though because the `is_Exact` flag is incorrect: ```python In [9]: RR.is_Exact Out[9]: False In [10]: RR[x].is_Exact # incorrect Out[10]: True In [11]: RR.frac_field(x).is_Exact Out[11]: False ``` This diff fixes it: ```diff diff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py index bad73208f8..e3851d4e99 100644 --- a/sympy/polys/domains/polynomialring.py +++ b/sympy/polys/domains/polynomialring.py @@ -67,6 +67,10 @@ def __eq__(self, other): (self.dtype.ring, self.domain, self.symbols) == \ (other.dtype.ring, other.domain, other.symbols) + @property + def is_Exact(self): + return self.domain.is_Exact + def is_unit(self, a): """Returns ``True`` if ``a`` is a unit of ``self``""" if not a.is_ground: ``` Thanks a lot! The "diff" does not work for me, probably I because I can run python only on an iPad with a jupyter notes app. The solution you give is the solution sympy.solve finds very fast. (I thought the solution was wrong, but it seems my equations are wrong for my purpose) Thanks again! The `is_Exact` flag needs to be fixed.
[ { "body": "I am trying to solve the nonlinear system of equations below. (These are the three 'law of cosine' equations, I found them in the internet.)\r\n```\r\na1, b1, c1, ca, cb, cg = sm.symbols('a1, b1, c1, ca, cb, cg')\r\neq1 = a1*a1 + b1*b1 - 2.*a1*b1*cg - c1*c1\r\neq2 = a1*a1 + c1*c1 - 2.*a1*c1*cb - b1*b1\r\neq3 = b1*b1 + c1*c1 - 2.*b1*c1*ca - a1*a1\r\n\r\nloesung = sm.nonlinsolve([eq1, eq2, eq3], [c1, cb, cg])\r\n```\r\nI first tried sympy.solve.\r\nIt gave a solution, actually two solutions, but in the documentation I could not find, that sympy.solve will solve systems of nonlinear equations.\r\nAlso, the solutions seemed to be wrong, e.g. the cos(angle) > 1\r\n\r\nThen I tried sympy.nonlinsolve.\r\nAfter about 45 min of running, I aborted it.\r\n\r\nMy question:\r\nIs it normal, that it takes so long to find a solution, or is this an indication, that none exists?\r\n\r\nMany thanks for any explanations!", "number": 25182, "title": "Question about sympy.solve / sympy.nonlinsolve" } ]
ffd3f417742ecb9ee0cc0820677980ce78d8ea7d
{ "head_commit": "bfd5e774dacebb257b15dcacdd3310108c9126df", "head_commit_message": "polys: don't convert to exact domain for RR[z] gcd", "patch_to_review": "diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py\nindex f3839e22d1cd..aa3411864e10 100644\n--- a/sympy/core/numbers.py\n+++ b/sympy/core/numbers.py\n@@ -4248,6 +4248,53 @@ def equal_valued(x, y):\n return (1 << neg_exp) == q\n \n \n+def all_close(expr1, expr2, rtol=1e-5, atol=1e-8):\n+ \"\"\"Return True if expr1 and expr2 are numerically close.\n+\n+ The expressions must have the same structure, but any Rational, Integer, or\n+ Float numbers they contain are compared approximately using rtol and atol.\n+ Any other parts of expressions are compared exactly.\n+\n+ Relative tolerance is measured with respect to expr2 so when used in\n+ testing expr2 should be the expected correct answer.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import exp\n+ >>> from sympy.abc import x, y\n+ >>> from sympy.core.numbers import all_close\n+ >>> expr1 = 0.1*exp(x - y)\n+ >>> expr2 = exp(x - y)/10\n+ >>> expr1\n+ 0.1*exp(x - y)\n+ >>> expr2\n+ exp(x - y)/10\n+ >>> expr1 == expr2\n+ False\n+ >>> all_close(expr1, expr2)\n+ True\n+ \"\"\"\n+ NUM_TYPES = (Rational, Float)\n+\n+ def _all_close(expr1, expr2, rtol, atol):\n+ num1 = isinstance(expr1, NUM_TYPES)\n+ num2 = isinstance(expr2, NUM_TYPES)\n+ if num1 != num2:\n+ return False\n+ elif num1:\n+ return bool(abs(expr1 - expr2) <= atol + rtol*abs(expr2))\n+ elif expr1.is_Atom:\n+ return expr1 == expr2\n+ elif expr1.func != expr2.func or len(expr1.args) != len(expr2.args):\n+ return False\n+ else:\n+ args = zip(expr1.args, expr2.args)\n+ return all(_all_close(a1, a2, rtol, atol) for a1, a2 in args)\n+\n+ return _all_close(_sympify(expr1), _sympify(expr2), rtol, atol)\n+\n+\n @dispatch(Tuple, Number) # type:ignore\n def _eval_is_eq(self, other): # noqa: F811\n return False\ndiff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py\nindex 82a221cf4417..fb4c6a86e241 100644\n--- a/sympy/core/tests/test_numbers.py\n+++ b/sympy/core/tests/test_numbers.py\n@@ -10,7 +10,7 @@\n from sympy.core.numbers import (mpf_norm, seterr,\n Integer, I, pi, comp, Rational, E, nan,\n oo, AlgebraicNumber, Number, Float, zoo, equal_valued,\n- int_valued)\n+ int_valued, all_close)\n from sympy.core.intfunc import (igcd, igcdex, igcd2, igcd_lehmer,\n ilcm, integer_nthroot, isqrt, integer_log, mod_inverse)\n from sympy.core.power import Pow\n@@ -2280,3 +2280,18 @@ def test_equal_valued():\n continue\n for value_j in values_n:\n assert equal_valued(value_i, value_j) is False\n+\n+\n+def test_all_close():\n+ x = Symbol('x')\n+ assert all_close(2, 2) is True\n+ assert all_close(2, 2.0000) is True\n+ assert all_close(2, 2.0001) is False\n+ assert all_close(1/3, 1/3.0001) is False\n+ assert all_close(1/3, 1/3.0001, 1e-3, 1e-3) is True\n+ assert all_close(1/3, Rational(1, 3)) is True\n+ assert all_close(0.1*exp(0.2*x), exp(x/5)/10) is True\n+ # The expressions should be structurally the same:\n+ assert all_close(1.4142135623730951, sqrt(2)) is False\n+ assert all_close(1.4142135623730951, sqrt(2).evalf()) is True\n+ assert all_close(x + 1e-20, x) is False\ndiff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py\nindex 994c7e5b2c03..2b739d184f73 100644\n--- a/sympy/integrals/tests/test_integrals.py\n+++ b/sympy/integrals/tests/test_integrals.py\n@@ -5,7 +5,7 @@\n from sympy.core.expr import Expr\n from sympy.core.function import (Derivative, Function, Lambda, diff)\n 
from sympy.core import EulerGamma\n-from sympy.core.numbers import (E, Float, I, Rational, nan, oo, pi, zoo)\n+from sympy.core.numbers import (E, I, Rational, nan, oo, pi, zoo, all_close)\n from sympy.core.relational import (Eq, Ne)\n from sympy.core.singleton import S\n from sympy.core.symbol import (Symbol, symbols)\n@@ -438,13 +438,12 @@ def test_issue_18133():\n \n \n def test_issue_21741():\n- a = Float('3999999.9999999995', precision=53)\n- b = Float('2.5000000000000004e-7', precision=53)\n- r = Piecewise((b*I*exp(-a*I*pi*t*y)*exp(-a*I*pi*x*z)/(pi*x),\n- Ne(1.0*pi*x*exp(a*I*pi*t*y), 0)),\n+ a = 4e6\n+ b = 2.5e-7\n+ r = Piecewise((b*I*exp(-a*I*pi*t*y)*exp(-a*I*pi*x*z)/(pi*x), Ne(x, 0)),\n (z*exp(-a*I*pi*t*y), True))\n fun = E**((-2*I*pi*(z*x+t*y))/(500*10**(-9)))\n- assert integrate(fun, z) == r\n+ assert all_close(integrate(fun, z), r)\n \n \n def test_matrices():\ndiff --git a/sympy/polys/domains/complexfield.py b/sympy/polys/domains/complexfield.py\nindex a36e94ebdfb6..4642b20249be 100644\n--- a/sympy/polys/domains/complexfield.py\n+++ b/sympy/polys/domains/complexfield.py\n@@ -5,6 +5,7 @@\n from sympy.core.numbers import Float, I\n from sympy.polys.domains.characteristiczero import CharacteristicZero\n from sympy.polys.domains.field import Field\n+from sympy.polys.domains.gaussiandomains import QQ_I\n from sympy.polys.domains.mpelements import MPContext\n from sympy.polys.domains.simpledomain import SimpleDomain\n from sympy.polys.polyerrors import DomainError, CoercionFailed\n@@ -136,7 +137,7 @@ def get_ring(self):\n \n def get_exact(self):\n \"\"\"Returns an exact domain associated with ``self``. \"\"\"\n- raise DomainError(\"there is no exact domain associated with %s\" % self)\n+ return QQ_I\n \n def is_negative(self, element):\n \"\"\"Returns ``False`` for any ``ComplexElement``. \"\"\"\ndiff --git a/sympy/polys/domains/compositedomain.py b/sympy/polys/domains/compositedomain.py\nindex 560720a02469..a8f63ba7bb86 100644\n--- a/sympy/polys/domains/compositedomain.py\n+++ b/sympy/polys/domains/compositedomain.py\n@@ -30,3 +30,23 @@ def drop(self, *symbols):\n return domain\n else:\n return self.__class__(domain, newsyms, self.order)\n+\n+ def set_domain(self, domain):\n+ \"\"\"Set the ground domain of this domain. \"\"\"\n+ return self.__class__(domain, self.symbols, self.order)\n+\n+ @property\n+ def is_Exact(self):\n+ \"\"\"Returns ``True`` if this domain is exact. \"\"\"\n+ return self.domain.is_Exact\n+\n+ def get_exact(self):\n+ \"\"\"Returns an exact version of this domain. 
\"\"\"\n+ return self.set_domain(self.domain.get_exact())\n+\n+ @property\n+ def has_CharacteristicZero(self):\n+ return self.domain.has_CharacteristicZero\n+\n+ def characteristic(self):\n+ return self.domain.characteristic()\ndiff --git a/sympy/polys/domains/fractionfield.py b/sympy/polys/domains/fractionfield.py\nindex 6dbfc904a408..47bc25436b8e 100644\n--- a/sympy/polys/domains/fractionfield.py\n+++ b/sympy/polys/domains/fractionfield.py\n@@ -49,13 +49,6 @@ def one(self):\n def order(self):\n return self.field.order\n \n- @property\n- def is_Exact(self):\n- return self.domain.is_Exact\n-\n- def get_exact(self):\n- return FractionField(self.domain.get_exact(), self.symbols)\n-\n def __str__(self):\n return str(self.domain) + '(' + ','.join(map(str, self.symbols)) + ')'\n \n@@ -68,13 +61,6 @@ def __eq__(self, other):\n (self.dtype.field, self.domain, self.symbols) ==\\\n (other.dtype.field, other.domain, other.symbols)\n \n- @property\n- def has_CharacteristicZero(self):\n- return self.domain.has_CharacteristicZero\n-\n- def characteristic(self):\n- return self.domain.characteristic()\n-\n def to_sympy(self, a):\n \"\"\"Convert ``a`` to a SymPy object. \"\"\"\n return a.as_expr()\ndiff --git a/sympy/polys/domains/gaussiandomains.py b/sympy/polys/domains/gaussiandomains.py\nindex e8b9701d912c..bf3df50d5de6 100644\n--- a/sympy/polys/domains/gaussiandomains.py\n+++ b/sympy/polys/domains/gaussiandomains.py\n@@ -678,4 +678,9 @@ def from_GaussianRationalField(K1, a, K0):\n \"\"\"Convert a QQ_I element to QQ_I.\"\"\"\n return a\n \n+ def from_ComplexField(K1, a, K0):\n+ \"\"\"Convert a ComplexField element to QQ_I.\"\"\"\n+ return K1.new(QQ.convert(a.real), QQ.convert(a.imag))\n+\n+\n QQ_I = GaussianRational._parent = GaussianRationalField()\ndiff --git a/sympy/polys/domains/old_fractionfield.py b/sympy/polys/domains/old_fractionfield.py\nindex c05e6f99c1a0..25d849c39e45 100644\n--- a/sympy/polys/domains/old_fractionfield.py\n+++ b/sympy/polys/domains/old_fractionfield.py\n@@ -3,14 +3,13 @@\n \n from sympy.polys.domains.field import Field\n from sympy.polys.domains.compositedomain import CompositeDomain\n-from sympy.polys.domains.characteristiczero import CharacteristicZero\n from sympy.polys.polyclasses import DMF\n from sympy.polys.polyerrors import GeneratorsNeeded\n from sympy.polys.polyutils import dict_from_basic, basic_from_dict, _dict_reorder\n from sympy.utilities import public\n \n @public\n-class FractionField(Field, CharacteristicZero, CompositeDomain):\n+class FractionField(Field, CompositeDomain):\n \"\"\"A class for representing rational function fields. \"\"\"\n \n dtype = DMF\n@@ -32,6 +31,10 @@ def __init__(self, dom, *gens):\n self.domain = self.dom = dom\n self.symbols = self.gens = gens\n \n+ def set_domain(self, dom):\n+ \"\"\"Make a new fraction field with given domain. \"\"\"\n+ return self.__class__(dom, *self.gens)\n+\n def new(self, element):\n return self.dtype(element, self.dom, len(self.gens) - 1)\n \n@@ -46,13 +49,6 @@ def __eq__(self, other):\n return isinstance(other, FractionField) and \\\n self.dtype == other.dtype and self.dom == other.dom and self.gens == other.gens\n \n- @property\n- def has_CharacteristicZero(self):\n- return self.dom.has_CharacteristicZero\n-\n- def characteristic(self):\n- return self.dom.characteristic()\n-\n def to_sympy(self, a):\n \"\"\"Convert ``a`` to a SymPy object. 
\"\"\"\n return (basic_from_dict(a.numer().to_sympy_dict(), *self.gens) /\ndiff --git a/sympy/polys/domains/old_polynomialring.py b/sympy/polys/domains/old_polynomialring.py\nindex bd637b257d7a..c29a4529aac3 100644\n--- a/sympy/polys/domains/old_polynomialring.py\n+++ b/sympy/polys/domains/old_polynomialring.py\n@@ -2,7 +2,6 @@\n \n \n from sympy.polys.agca.modules import FreeModulePolyRing\n-from sympy.polys.domains.characteristiczero import CharacteristicZero\n from sympy.polys.domains.compositedomain import CompositeDomain\n from sympy.polys.domains.old_fractionfield import FractionField\n from sympy.polys.domains.ring import Ring\n@@ -14,10 +13,9 @@\n from sympy.utilities import public\n from sympy.utilities.iterables import iterable\n \n-# XXX why does this derive from CharacteristicZero???\n \n @public\n-class PolynomialRingBase(Ring, CharacteristicZero, CompositeDomain):\n+class PolynomialRingBase(Ring, CompositeDomain):\n \"\"\"\n Base class for generalized polynomial rings.\n \n@@ -47,6 +45,10 @@ def __init__(self, dom, *gens, **opts):\n # NOTE 'order' may not be set if inject was called through CompositeDomain\n self.order = opts.get('order', monomial_key(self.default_order))\n \n+ def set_domain(self, dom):\n+ \"\"\"Return a new polynomial ring with given domain. \"\"\"\n+ return self.__class__(dom, *self.gens, order=self.order)\n+\n def new(self, element):\n return self.dtype(element, self.dom, len(self.gens) - 1)\n \n@@ -72,13 +74,6 @@ def __eq__(self, other):\n self.dtype == other.dtype and self.dom == other.dom and \\\n self.gens == other.gens and self.order == other.order\n \n- @property\n- def has_CharacteristicZero(self):\n- return self.dom.has_CharacteristicZero\n-\n- def characteristic(self):\n- return self.dom.characteristic()\n-\n def from_ZZ(K1, a, K0):\n \"\"\"Convert a Python ``int`` object to ``dtype``. 
\"\"\"\n return K1._ground_new(K1.dom.convert(a, K0))\ndiff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py\nindex 0063b6f34b44..bad73208f866 100644\n--- a/sympy/polys/domains/polynomialring.py\n+++ b/sympy/polys/domains/polynomialring.py\n@@ -67,13 +67,6 @@ def __eq__(self, other):\n (self.dtype.ring, self.domain, self.symbols) == \\\n (other.dtype.ring, other.domain, other.symbols)\n \n- @property\n- def has_CharacteristicZero(self):\n- return self.domain.has_CharacteristicZero\n-\n- def characteristic(self):\n- return self.domain.characteristic()\n-\n def is_unit(self, a):\n \"\"\"Returns ``True`` if ``a`` is a unit of ``self``\"\"\"\n if not a.is_ground:\ndiff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py\nindex 4eb97d12c6d4..3fe40cdb8fab 100644\n--- a/sympy/polys/domains/tests/test_domains.py\n+++ b/sympy/polys/domains/tests/test_domains.py\n@@ -562,22 +562,70 @@ def test_Domain_get_field():\n assert QQ[x, y].get_field() == QQ.frac_field(x, y)\n \n \n+def test_Domain_set_domain():\n+ doms = [GF(5), ZZ, QQ, ALG, RR, CC, EX, ZZ[z], QQ[z], RR[z], CC[z], EX[z]]\n+ for D1 in doms:\n+ for D2 in doms:\n+ assert D1[x].set_domain(D2) == D2[x]\n+ assert D1[x, y].set_domain(D2) == D2[x, y]\n+ assert D1.frac_field(x).set_domain(D2) == D2.frac_field(x)\n+ assert D1.frac_field(x, y).set_domain(D2) == D2.frac_field(x, y)\n+ assert D1.old_poly_ring(x).set_domain(D2) == D2.old_poly_ring(x)\n+ assert D1.old_poly_ring(x, y).set_domain(D2) == D2.old_poly_ring(x, y)\n+ assert D1.old_frac_field(x).set_domain(D2) == D2.old_frac_field(x)\n+ assert D1.old_frac_field(x, y).set_domain(D2) == D2.old_frac_field(x, y)\n+\n+\n+def test_Domain_is_Exact():\n+ exact = [GF(5), ZZ, QQ, ALG, EX]\n+ inexact = [RR, CC]\n+ for D in exact + inexact:\n+ for R in D, D[x], D.frac_field(x), D.old_poly_ring(x), D.old_frac_field(x):\n+ if D in exact:\n+ assert R.is_Exact is True\n+ else:\n+ assert R.is_Exact is False\n+\n+\n def test_Domain_get_exact():\n assert EX.get_exact() == EX\n assert ZZ.get_exact() == ZZ\n assert QQ.get_exact() == QQ\n assert RR.get_exact() == QQ\n- # XXX: This should also be like RR:\n- # assert CC.get_exact() == QQ_I\n+ assert CC.get_exact() == QQ_I\n assert ALG.get_exact() == ALG\n assert ZZ[x].get_exact() == ZZ[x]\n assert QQ[x].get_exact() == QQ[x]\n+ assert RR[x].get_exact() == QQ[x]\n+ assert CC[x].get_exact() == QQ_I[x]\n assert ZZ[x, y].get_exact() == ZZ[x, y]\n assert QQ[x, y].get_exact() == QQ[x, y]\n+ assert RR[x, y].get_exact() == QQ[x, y]\n+ assert CC[x, y].get_exact() == QQ_I[x, y]\n assert ZZ.frac_field(x).get_exact() == ZZ.frac_field(x)\n assert QQ.frac_field(x).get_exact() == QQ.frac_field(x)\n+ assert RR.frac_field(x).get_exact() == QQ.frac_field(x)\n+ assert CC.frac_field(x).get_exact() == QQ_I.frac_field(x)\n assert ZZ.frac_field(x, y).get_exact() == ZZ.frac_field(x, y)\n assert QQ.frac_field(x, y).get_exact() == QQ.frac_field(x, y)\n+ assert RR.frac_field(x, y).get_exact() == QQ.frac_field(x, y)\n+ assert CC.frac_field(x, y).get_exact() == QQ_I.frac_field(x, y)\n+ assert ZZ.old_poly_ring(x).get_exact() == ZZ.old_poly_ring(x)\n+ assert QQ.old_poly_ring(x).get_exact() == QQ.old_poly_ring(x)\n+ assert RR.old_poly_ring(x).get_exact() == QQ.old_poly_ring(x)\n+ assert CC.old_poly_ring(x).get_exact() == QQ_I.old_poly_ring(x)\n+ assert ZZ.old_poly_ring(x, y).get_exact() == ZZ.old_poly_ring(x, y)\n+ assert QQ.old_poly_ring(x, y).get_exact() == QQ.old_poly_ring(x, y)\n+ assert RR.old_poly_ring(x, 
y).get_exact() == QQ.old_poly_ring(x, y)\n+ assert CC.old_poly_ring(x, y).get_exact() == QQ_I.old_poly_ring(x, y)\n+ assert ZZ.old_frac_field(x).get_exact() == ZZ.old_frac_field(x)\n+ assert QQ.old_frac_field(x).get_exact() == QQ.old_frac_field(x)\n+ assert RR.old_frac_field(x).get_exact() == QQ.old_frac_field(x)\n+ assert CC.old_frac_field(x).get_exact() == QQ_I.old_frac_field(x)\n+ assert ZZ.old_frac_field(x, y).get_exact() == ZZ.old_frac_field(x, y)\n+ assert QQ.old_frac_field(x, y).get_exact() == QQ.old_frac_field(x, y)\n+ assert RR.old_frac_field(x, y).get_exact() == QQ.old_frac_field(x, y)\n+ assert CC.old_frac_field(x, y).get_exact() == QQ_I.old_frac_field(x, y)\n \n \n def test_Domain_characteristic():\n@@ -614,8 +662,8 @@ def check_element(e1, e2, K1, K2, K3):\n \n def check_domains(K1, K2):\n K3 = K1.unify(K2)\n- check_element(K3.convert_from( K1.one, K1), K3.one, K1, K2, K3)\n- check_element(K3.convert_from( K2.one, K2), K3.one, K1, K2, K3)\n+ check_element(K3.convert_from(K1.one, K1), K3.one, K1, K2, K3)\n+ check_element(K3.convert_from(K2.one, K2), K3.one, K1, K2, K3)\n check_element(K3.convert_from(K1.zero, K1), K3.zero, K1, K2, K3)\n check_element(K3.convert_from(K2.zero, K2), K3.zero, K1, K2, K3)\n \n@@ -648,6 +696,11 @@ def composite_domains(K):\n assert CC.convert(ZZ_I(1, 2)) == CC(1, 2)\n assert CC.convert(QQ_I(1, 2)) == CC(1, 2)\n \n+ assert QQ.convert_from(RR(0.5), RR) == QQ(1, 2)\n+ assert RR.convert_from(QQ(1, 2), QQ) == RR(0.5)\n+ assert QQ_I.convert_from(CC(0.5, 0.75), CC) == QQ_I(QQ(1, 2), QQ(3, 4))\n+ assert CC.convert_from(QQ_I(QQ(1, 2), QQ(3, 4)), QQ_I) == CC(0.5, 0.75)\n+\n K1 = QQ.frac_field(x)\n K2 = ZZ.frac_field(x)\n K3 = QQ[x]\ndiff --git a/sympy/polys/euclidtools.py b/sympy/polys/euclidtools.py\nindex 1a919e1f108d..2143d6cc444d 100644\n--- a/sympy/polys/euclidtools.py\n+++ b/sympy/polys/euclidtools.py\n@@ -1489,7 +1489,19 @@ def dup_inner_gcd(f, g, K):\n (x - 1, x + 1, x - 2)\n \n \"\"\"\n- if not K.is_Exact:\n+ # XXX: This used to check for K.is_Exact but leads to awkward results when\n+ # the domain is something like RR[z] e.g.:\n+ #\n+ # >>> g, p, q = Poly(1, x).cancel(Poly(51.05*x*y - 1.0, x))\n+ # >>> g\n+ # 1.0\n+ # >>> q\n+ # Poly(17592186044421.0, x, domain='RR[y]')\n+ # >>> q\n+ # Poly(898081097567692.0*y*x - 17592186044421.0, x, domain='RR[y]'))\n+ #\n+ # Maybe it would be better to flatten into multivariate polynomials first.\n+ if K.is_RR or K.is_CC:\n try:\n exact = K.get_exact()\n except DomainError:\ndiff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py\nindex 87dd4cb424e9..1c4dd02dc30c 100644\n--- a/sympy/solvers/tests/test_solveset.py\n+++ b/sympy/solvers/tests/test_solveset.py\n@@ -1899,6 +1899,20 @@ def test_solve_nonlinear_trans():\n assert nonlinsolve([x**2 - y**2/exp(x)], [x, y]) == soln4\n \n \n+def test_nonlinsolve_issue_25182():\n+ a1, b1, c1, ca, cb, cg = symbols('a1, b1, c1, ca, cb, cg')\n+ eq1 = a1*a1 + b1*b1 - 2.*a1*b1*cg - c1*c1\n+ eq2 = a1*a1 + c1*c1 - 2.*a1*c1*cb - b1*b1\n+ eq3 = b1*b1 + c1*c1 - 2.*b1*c1*ca - a1*a1\n+ assert nonlinsolve([eq1, eq2, eq3], [c1, cb, cg]) == FiniteSet(\n+ (1.0*b1*ca - 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2),\n+ -1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1,\n+ -1.0*b1*(ca - 1)*(ca + 1)/a1 + 1.0*ca*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1),\n+ (1.0*b1*ca + 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2),\n+ 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1,\n+ -1.0*b1*(ca - 1)*(ca + 1)/a1 - 1.0*ca*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1))\n+\n+\n def test_issue_14642():\n x = Symbol('x')\n n1 
= 0.5*x**3+x**2+0.5+I #add I in the Polynomials\n" }
[ { "diff_hunk": "@@ -1489,7 +1489,19 @@ def dup_inner_gcd(f, g, K):\n (x - 1, x + 1, x - 2)\n \n \"\"\"\n- if not K.is_Exact:\n+ # XXX: This used to check for K.is_Exact but leads to awkward results when\n+ # the domain is something like RR[z] e.g.:\n+ #\n+ # >>> g, p, q = Poly(1, x).cancel(Poly(51.05*x*y - 1.0, x))\n+ # >>> g\n+ # 1.0\n+ # >>> q", "line": null, "original_line": 1498, "original_start_line": null, "path": "sympy/polys/euclidtools.py", "start_line": null, "text": "@user1:\nShould this be p? \n\n@user2:\nYes. I've just changed it." } ]
5b64014b3b50e1fdc5e95d857940a4136792c652
diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index f3839e22d1cd..aa3411864e10 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -4248,6 +4248,53 @@ def equal_valued(x, y): return (1 << neg_exp) == q +def all_close(expr1, expr2, rtol=1e-5, atol=1e-8): + """Return True if expr1 and expr2 are numerically close. + + The expressions must have the same structure, but any Rational, Integer, or + Float numbers they contain are compared approximately using rtol and atol. + Any other parts of expressions are compared exactly. + + Relative tolerance is measured with respect to expr2 so when used in + testing expr2 should be the expected correct answer. + + Examples + ======== + + >>> from sympy import exp + >>> from sympy.abc import x, y + >>> from sympy.core.numbers import all_close + >>> expr1 = 0.1*exp(x - y) + >>> expr2 = exp(x - y)/10 + >>> expr1 + 0.1*exp(x - y) + >>> expr2 + exp(x - y)/10 + >>> expr1 == expr2 + False + >>> all_close(expr1, expr2) + True + """ + NUM_TYPES = (Rational, Float) + + def _all_close(expr1, expr2, rtol, atol): + num1 = isinstance(expr1, NUM_TYPES) + num2 = isinstance(expr2, NUM_TYPES) + if num1 != num2: + return False + elif num1: + return bool(abs(expr1 - expr2) <= atol + rtol*abs(expr2)) + elif expr1.is_Atom: + return expr1 == expr2 + elif expr1.func != expr2.func or len(expr1.args) != len(expr2.args): + return False + else: + args = zip(expr1.args, expr2.args) + return all(_all_close(a1, a2, rtol, atol) for a1, a2 in args) + + return _all_close(_sympify(expr1), _sympify(expr2), rtol, atol) + + @dispatch(Tuple, Number) # type:ignore def _eval_is_eq(self, other): # noqa: F811 return False diff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py index 82a221cf4417..fb4c6a86e241 100644 --- a/sympy/core/tests/test_numbers.py +++ b/sympy/core/tests/test_numbers.py @@ -10,7 +10,7 @@ from sympy.core.numbers import (mpf_norm, seterr, Integer, I, pi, comp, Rational, E, nan, oo, AlgebraicNumber, Number, Float, zoo, equal_valued, - int_valued) + int_valued, all_close) from sympy.core.intfunc import (igcd, igcdex, igcd2, igcd_lehmer, ilcm, integer_nthroot, isqrt, integer_log, mod_inverse) from sympy.core.power import Pow @@ -2280,3 +2280,18 @@ def test_equal_valued(): continue for value_j in values_n: assert equal_valued(value_i, value_j) is False + + +def test_all_close(): + x = Symbol('x') + assert all_close(2, 2) is True + assert all_close(2, 2.0000) is True + assert all_close(2, 2.0001) is False + assert all_close(1/3, 1/3.0001) is False + assert all_close(1/3, 1/3.0001, 1e-3, 1e-3) is True + assert all_close(1/3, Rational(1, 3)) is True + assert all_close(0.1*exp(0.2*x), exp(x/5)/10) is True + # The expressions should be structurally the same: + assert all_close(1.4142135623730951, sqrt(2)) is False + assert all_close(1.4142135623730951, sqrt(2).evalf()) is True + assert all_close(x + 1e-20, x) is False diff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py index 994c7e5b2c03..2b739d184f73 100644 --- a/sympy/integrals/tests/test_integrals.py +++ b/sympy/integrals/tests/test_integrals.py @@ -5,7 +5,7 @@ from sympy.core.expr import Expr from sympy.core.function import (Derivative, Function, Lambda, diff) from sympy.core import EulerGamma -from sympy.core.numbers import (E, Float, I, Rational, nan, oo, pi, zoo) +from sympy.core.numbers import (E, I, Rational, nan, oo, pi, zoo, all_close) from sympy.core.relational import (Eq, Ne) from sympy.core.singleton import S from 
sympy.core.symbol import (Symbol, symbols) @@ -438,13 +438,12 @@ def test_issue_18133(): def test_issue_21741(): - a = Float('3999999.9999999995', precision=53) - b = Float('2.5000000000000004e-7', precision=53) - r = Piecewise((b*I*exp(-a*I*pi*t*y)*exp(-a*I*pi*x*z)/(pi*x), - Ne(1.0*pi*x*exp(a*I*pi*t*y), 0)), + a = 4e6 + b = 2.5e-7 + r = Piecewise((b*I*exp(-a*I*pi*t*y)*exp(-a*I*pi*x*z)/(pi*x), Ne(x, 0)), (z*exp(-a*I*pi*t*y), True)) fun = E**((-2*I*pi*(z*x+t*y))/(500*10**(-9))) - assert integrate(fun, z) == r + assert all_close(integrate(fun, z), r) def test_matrices(): diff --git a/sympy/polys/domains/complexfield.py b/sympy/polys/domains/complexfield.py index a36e94ebdfb6..4642b20249be 100644 --- a/sympy/polys/domains/complexfield.py +++ b/sympy/polys/domains/complexfield.py @@ -5,6 +5,7 @@ from sympy.core.numbers import Float, I from sympy.polys.domains.characteristiczero import CharacteristicZero from sympy.polys.domains.field import Field +from sympy.polys.domains.gaussiandomains import QQ_I from sympy.polys.domains.mpelements import MPContext from sympy.polys.domains.simpledomain import SimpleDomain from sympy.polys.polyerrors import DomainError, CoercionFailed @@ -136,7 +137,7 @@ def get_ring(self): def get_exact(self): """Returns an exact domain associated with ``self``. """ - raise DomainError("there is no exact domain associated with %s" % self) + return QQ_I def is_negative(self, element): """Returns ``False`` for any ``ComplexElement``. """ diff --git a/sympy/polys/domains/compositedomain.py b/sympy/polys/domains/compositedomain.py index 560720a02469..a8f63ba7bb86 100644 --- a/sympy/polys/domains/compositedomain.py +++ b/sympy/polys/domains/compositedomain.py @@ -30,3 +30,23 @@ def drop(self, *symbols): return domain else: return self.__class__(domain, newsyms, self.order) + + def set_domain(self, domain): + """Set the ground domain of this domain. """ + return self.__class__(domain, self.symbols, self.order) + + @property + def is_Exact(self): + """Returns ``True`` if this domain is exact. """ + return self.domain.is_Exact + + def get_exact(self): + """Returns an exact version of this domain. """ + return self.set_domain(self.domain.get_exact()) + + @property + def has_CharacteristicZero(self): + return self.domain.has_CharacteristicZero + + def characteristic(self): + return self.domain.characteristic() diff --git a/sympy/polys/domains/fractionfield.py b/sympy/polys/domains/fractionfield.py index 6dbfc904a408..47bc25436b8e 100644 --- a/sympy/polys/domains/fractionfield.py +++ b/sympy/polys/domains/fractionfield.py @@ -49,13 +49,6 @@ def one(self): def order(self): return self.field.order - @property - def is_Exact(self): - return self.domain.is_Exact - - def get_exact(self): - return FractionField(self.domain.get_exact(), self.symbols) - def __str__(self): return str(self.domain) + '(' + ','.join(map(str, self.symbols)) + ')' @@ -68,13 +61,6 @@ def __eq__(self, other): (self.dtype.field, self.domain, self.symbols) ==\ (other.dtype.field, other.domain, other.symbols) - @property - def has_CharacteristicZero(self): - return self.domain.has_CharacteristicZero - - def characteristic(self): - return self.domain.characteristic() - def to_sympy(self, a): """Convert ``a`` to a SymPy object. 
""" return a.as_expr() diff --git a/sympy/polys/domains/gaussiandomains.py b/sympy/polys/domains/gaussiandomains.py index e8b9701d912c..bf3df50d5de6 100644 --- a/sympy/polys/domains/gaussiandomains.py +++ b/sympy/polys/domains/gaussiandomains.py @@ -678,4 +678,9 @@ def from_GaussianRationalField(K1, a, K0): """Convert a QQ_I element to QQ_I.""" return a + def from_ComplexField(K1, a, K0): + """Convert a ComplexField element to QQ_I.""" + return K1.new(QQ.convert(a.real), QQ.convert(a.imag)) + + QQ_I = GaussianRational._parent = GaussianRationalField() diff --git a/sympy/polys/domains/old_fractionfield.py b/sympy/polys/domains/old_fractionfield.py index c05e6f99c1a0..25d849c39e45 100644 --- a/sympy/polys/domains/old_fractionfield.py +++ b/sympy/polys/domains/old_fractionfield.py @@ -3,14 +3,13 @@ from sympy.polys.domains.field import Field from sympy.polys.domains.compositedomain import CompositeDomain -from sympy.polys.domains.characteristiczero import CharacteristicZero from sympy.polys.polyclasses import DMF from sympy.polys.polyerrors import GeneratorsNeeded from sympy.polys.polyutils import dict_from_basic, basic_from_dict, _dict_reorder from sympy.utilities import public @public -class FractionField(Field, CharacteristicZero, CompositeDomain): +class FractionField(Field, CompositeDomain): """A class for representing rational function fields. """ dtype = DMF @@ -32,6 +31,10 @@ def __init__(self, dom, *gens): self.domain = self.dom = dom self.symbols = self.gens = gens + def set_domain(self, dom): + """Make a new fraction field with given domain. """ + return self.__class__(dom, *self.gens) + def new(self, element): return self.dtype(element, self.dom, len(self.gens) - 1) @@ -46,13 +49,6 @@ def __eq__(self, other): return isinstance(other, FractionField) and \ self.dtype == other.dtype and self.dom == other.dom and self.gens == other.gens - @property - def has_CharacteristicZero(self): - return self.dom.has_CharacteristicZero - - def characteristic(self): - return self.dom.characteristic() - def to_sympy(self, a): """Convert ``a`` to a SymPy object. """ return (basic_from_dict(a.numer().to_sympy_dict(), *self.gens) / diff --git a/sympy/polys/domains/old_polynomialring.py b/sympy/polys/domains/old_polynomialring.py index bd637b257d7a..c29a4529aac3 100644 --- a/sympy/polys/domains/old_polynomialring.py +++ b/sympy/polys/domains/old_polynomialring.py @@ -2,7 +2,6 @@ from sympy.polys.agca.modules import FreeModulePolyRing -from sympy.polys.domains.characteristiczero import CharacteristicZero from sympy.polys.domains.compositedomain import CompositeDomain from sympy.polys.domains.old_fractionfield import FractionField from sympy.polys.domains.ring import Ring @@ -14,10 +13,9 @@ from sympy.utilities import public from sympy.utilities.iterables import iterable -# XXX why does this derive from CharacteristicZero??? @public -class PolynomialRingBase(Ring, CharacteristicZero, CompositeDomain): +class PolynomialRingBase(Ring, CompositeDomain): """ Base class for generalized polynomial rings. @@ -47,6 +45,10 @@ def __init__(self, dom, *gens, **opts): # NOTE 'order' may not be set if inject was called through CompositeDomain self.order = opts.get('order', monomial_key(self.default_order)) + def set_domain(self, dom): + """Return a new polynomial ring with given domain. 
""" + return self.__class__(dom, *self.gens, order=self.order) + def new(self, element): return self.dtype(element, self.dom, len(self.gens) - 1) @@ -72,13 +74,6 @@ def __eq__(self, other): self.dtype == other.dtype and self.dom == other.dom and \ self.gens == other.gens and self.order == other.order - @property - def has_CharacteristicZero(self): - return self.dom.has_CharacteristicZero - - def characteristic(self): - return self.dom.characteristic() - def from_ZZ(K1, a, K0): """Convert a Python ``int`` object to ``dtype``. """ return K1._ground_new(K1.dom.convert(a, K0)) diff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py index 0063b6f34b44..bad73208f866 100644 --- a/sympy/polys/domains/polynomialring.py +++ b/sympy/polys/domains/polynomialring.py @@ -67,13 +67,6 @@ def __eq__(self, other): (self.dtype.ring, self.domain, self.symbols) == \ (other.dtype.ring, other.domain, other.symbols) - @property - def has_CharacteristicZero(self): - return self.domain.has_CharacteristicZero - - def characteristic(self): - return self.domain.characteristic() - def is_unit(self, a): """Returns ``True`` if ``a`` is a unit of ``self``""" if not a.is_ground: diff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py index 4eb97d12c6d4..3fe40cdb8fab 100644 --- a/sympy/polys/domains/tests/test_domains.py +++ b/sympy/polys/domains/tests/test_domains.py @@ -562,22 +562,70 @@ def test_Domain_get_field(): assert QQ[x, y].get_field() == QQ.frac_field(x, y) +def test_Domain_set_domain(): + doms = [GF(5), ZZ, QQ, ALG, RR, CC, EX, ZZ[z], QQ[z], RR[z], CC[z], EX[z]] + for D1 in doms: + for D2 in doms: + assert D1[x].set_domain(D2) == D2[x] + assert D1[x, y].set_domain(D2) == D2[x, y] + assert D1.frac_field(x).set_domain(D2) == D2.frac_field(x) + assert D1.frac_field(x, y).set_domain(D2) == D2.frac_field(x, y) + assert D1.old_poly_ring(x).set_domain(D2) == D2.old_poly_ring(x) + assert D1.old_poly_ring(x, y).set_domain(D2) == D2.old_poly_ring(x, y) + assert D1.old_frac_field(x).set_domain(D2) == D2.old_frac_field(x) + assert D1.old_frac_field(x, y).set_domain(D2) == D2.old_frac_field(x, y) + + +def test_Domain_is_Exact(): + exact = [GF(5), ZZ, QQ, ALG, EX] + inexact = [RR, CC] + for D in exact + inexact: + for R in D, D[x], D.frac_field(x), D.old_poly_ring(x), D.old_frac_field(x): + if D in exact: + assert R.is_Exact is True + else: + assert R.is_Exact is False + + def test_Domain_get_exact(): assert EX.get_exact() == EX assert ZZ.get_exact() == ZZ assert QQ.get_exact() == QQ assert RR.get_exact() == QQ - # XXX: This should also be like RR: - # assert CC.get_exact() == QQ_I + assert CC.get_exact() == QQ_I assert ALG.get_exact() == ALG assert ZZ[x].get_exact() == ZZ[x] assert QQ[x].get_exact() == QQ[x] + assert RR[x].get_exact() == QQ[x] + assert CC[x].get_exact() == QQ_I[x] assert ZZ[x, y].get_exact() == ZZ[x, y] assert QQ[x, y].get_exact() == QQ[x, y] + assert RR[x, y].get_exact() == QQ[x, y] + assert CC[x, y].get_exact() == QQ_I[x, y] assert ZZ.frac_field(x).get_exact() == ZZ.frac_field(x) assert QQ.frac_field(x).get_exact() == QQ.frac_field(x) + assert RR.frac_field(x).get_exact() == QQ.frac_field(x) + assert CC.frac_field(x).get_exact() == QQ_I.frac_field(x) assert ZZ.frac_field(x, y).get_exact() == ZZ.frac_field(x, y) assert QQ.frac_field(x, y).get_exact() == QQ.frac_field(x, y) + assert RR.frac_field(x, y).get_exact() == QQ.frac_field(x, y) + assert CC.frac_field(x, y).get_exact() == QQ_I.frac_field(x, y) + assert 
ZZ.old_poly_ring(x).get_exact() == ZZ.old_poly_ring(x) + assert QQ.old_poly_ring(x).get_exact() == QQ.old_poly_ring(x) + assert RR.old_poly_ring(x).get_exact() == QQ.old_poly_ring(x) + assert CC.old_poly_ring(x).get_exact() == QQ_I.old_poly_ring(x) + assert ZZ.old_poly_ring(x, y).get_exact() == ZZ.old_poly_ring(x, y) + assert QQ.old_poly_ring(x, y).get_exact() == QQ.old_poly_ring(x, y) + assert RR.old_poly_ring(x, y).get_exact() == QQ.old_poly_ring(x, y) + assert CC.old_poly_ring(x, y).get_exact() == QQ_I.old_poly_ring(x, y) + assert ZZ.old_frac_field(x).get_exact() == ZZ.old_frac_field(x) + assert QQ.old_frac_field(x).get_exact() == QQ.old_frac_field(x) + assert RR.old_frac_field(x).get_exact() == QQ.old_frac_field(x) + assert CC.old_frac_field(x).get_exact() == QQ_I.old_frac_field(x) + assert ZZ.old_frac_field(x, y).get_exact() == ZZ.old_frac_field(x, y) + assert QQ.old_frac_field(x, y).get_exact() == QQ.old_frac_field(x, y) + assert RR.old_frac_field(x, y).get_exact() == QQ.old_frac_field(x, y) + assert CC.old_frac_field(x, y).get_exact() == QQ_I.old_frac_field(x, y) def test_Domain_characteristic(): @@ -614,8 +662,8 @@ def check_element(e1, e2, K1, K2, K3): def check_domains(K1, K2): K3 = K1.unify(K2) - check_element(K3.convert_from( K1.one, K1), K3.one, K1, K2, K3) - check_element(K3.convert_from( K2.one, K2), K3.one, K1, K2, K3) + check_element(K3.convert_from(K1.one, K1), K3.one, K1, K2, K3) + check_element(K3.convert_from(K2.one, K2), K3.one, K1, K2, K3) check_element(K3.convert_from(K1.zero, K1), K3.zero, K1, K2, K3) check_element(K3.convert_from(K2.zero, K2), K3.zero, K1, K2, K3) @@ -648,6 +696,11 @@ def composite_domains(K): assert CC.convert(ZZ_I(1, 2)) == CC(1, 2) assert CC.convert(QQ_I(1, 2)) == CC(1, 2) + assert QQ.convert_from(RR(0.5), RR) == QQ(1, 2) + assert RR.convert_from(QQ(1, 2), QQ) == RR(0.5) + assert QQ_I.convert_from(CC(0.5, 0.75), CC) == QQ_I(QQ(1, 2), QQ(3, 4)) + assert CC.convert_from(QQ_I(QQ(1, 2), QQ(3, 4)), QQ_I) == CC(0.5, 0.75) + K1 = QQ.frac_field(x) K2 = ZZ.frac_field(x) K3 = QQ[x] diff --git a/sympy/polys/euclidtools.py b/sympy/polys/euclidtools.py index 1a919e1f108d..768a44a94930 100644 --- a/sympy/polys/euclidtools.py +++ b/sympy/polys/euclidtools.py @@ -1489,7 +1489,19 @@ def dup_inner_gcd(f, g, K): (x - 1, x + 1, x - 2) """ - if not K.is_Exact: + # XXX: This used to check for K.is_Exact but leads to awkward results when + # the domain is something like RR[z] e.g.: + # + # >>> g, p, q = Poly(1, x).cancel(Poly(51.05*x*y - 1.0, x)) + # >>> g + # 1.0 + # >>> p + # Poly(17592186044421.0, x, domain='RR[y]') + # >>> q + # Poly(898081097567692.0*y*x - 17592186044421.0, x, domain='RR[y]')) + # + # Maybe it would be better to flatten into multivariate polynomials first. 
+ if K.is_RR or K.is_CC: try: exact = K.get_exact() except DomainError: diff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py index 87dd4cb424e9..1c4dd02dc30c 100644 --- a/sympy/solvers/tests/test_solveset.py +++ b/sympy/solvers/tests/test_solveset.py @@ -1899,6 +1899,20 @@ def test_solve_nonlinear_trans(): assert nonlinsolve([x**2 - y**2/exp(x)], [x, y]) == soln4 +def test_nonlinsolve_issue_25182(): + a1, b1, c1, ca, cb, cg = symbols('a1, b1, c1, ca, cb, cg') + eq1 = a1*a1 + b1*b1 - 2.*a1*b1*cg - c1*c1 + eq2 = a1*a1 + c1*c1 - 2.*a1*c1*cb - b1*b1 + eq3 = b1*b1 + c1*c1 - 2.*b1*c1*ca - a1*a1 + assert nonlinsolve([eq1, eq2, eq3], [c1, cb, cg]) == FiniteSet( + (1.0*b1*ca - 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2), + -1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1, + -1.0*b1*(ca - 1)*(ca + 1)/a1 + 1.0*ca*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1), + (1.0*b1*ca + 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2), + 1.0*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1, + -1.0*b1*(ca - 1)*(ca + 1)/a1 - 1.0*ca*sqrt(a1**2 + b1**2*ca**2 - b1**2)/a1)) + + def test_issue_14642(): x = Symbol('x') n1 = 0.5*x**3+x**2+0.5+I #add I in the Polynomials
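The `all_close` helper introduced by this patch can be exercised directly; a minimal sketch, assuming a SymPy build that includes the change (the calls mirror the docstring and tests shown above):

```python
from sympy import Rational, exp
from sympy.abc import x
from sympy.core.numbers import all_close  # available once this patch is applied

# structurally identical expressions whose numeric parts agree within tolerance
print(all_close(0.1*exp(0.2*x), exp(x/5)/10))           # True
# tolerances are measured relative to the second argument
print(all_close(Rational(1, 3), 1/3.0001))              # False at the defaults
print(all_close(Rational(1, 3), 1/3.0001, rtol=1e-3))   # True with a looser rtol
```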
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-26036@e8ed910
sympy/sympy
Python
26,036
Fix for bug in pow class
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #25165 #### Brief description of what is fixed or changed Added fixes for better execution of thee operation to remove the constant term from an expression in _eval_nseries method in Pow class #### Other comments Changed some old tests, as the new implementation gives more simplified forms of the expected results. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * functions * Fixed a bug in _eval_nseries method of Pow class to better handle the operation of negating the constant term from the expression. <!-- END RELEASE NOTES -->
2024-01-02T15:18:50Z
Series expansion not working Hello, In our project we use very large functions and a lot of series expansions throw a "not implemented error". I managed to extract a relatively simple case: `(1/sqrt(( - y + 1)**2 + (y - 0.23)**4)).series(y,x0=0,n=5)` When using `apart()` the series expansion works: `apart((1/sqrt(( - y + 1)**2 + (y - 0.23)**4))).series(y,x0=0,n=5)` sadly its not that easy with our larger functions. Any ideas or am I missing something fundamental? The error is: `Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/user/.local/lib/python3.10/site-packages/sympy/core/expr.py", line 3026, in series rv = self.subs(x, xpos).series(xpos, x0, n, dir, logx=logx, cdir=cdir) File "/home/user/.local/lib/python3.10/site-packages/sympy/core/expr.py", line 3034, in series s1 = self._eval_nseries(x, n=n, logx=logx, cdir=cdir) File "/home/user/.local/lib/python3.10/site-packages/sympy/core/power.py", line 1758, in _eval_nseries raise NotImplementedError() NotImplementedError`
Hey @RagonEbker thanks for reporting this issue. We actually have the necessary tools to address the above case. The problem lies here https://github.com/sympy/sympy/blob/aa6db7c688855c181e020929dd34b9639dec64ad/sympy/core/power.py#L1581-L1586 Let's say we want to calculate the series for `(1/sqrt(( - y + 1)**2 + (y - 0.6)**4))` Hence we encounter a terms `g` which is the following ``` >>> g 0.885269121813031*(y - 1)**2 + 0.885269121813031*(y - 0.6)**4 - 1.0 ``` If we look at this term the constant term here is S.Zero or rather there is no constant term ``` >>> g = 0.885269121813031*(y - 1)**2 + 0.885269121813031*(y - 0.6)**4 - 1.0 >>> (0.885269121813031*(y - 1)**2).expand() 0.885269121813031*y**2 - 1.77053824362606*y + 0.885269121813031 >>> (0.885269121813031*(y - 0.6)**4).expand() 0.885269121813031*y**4 - 2.12464589235127*y**3 + 1.91218130311615*y**2 - 0.764872521246459*y + 0.114730878186969 >>> 0.885269121813031 + 0.114730878186969 - 1.0 0.0 ``` But if we calculate the leading term for `g`, we get a constant due to a rounding off error. ``` >>> g.as_leading_term(y) -2.22044604925031e-16 ``` This leads to the wrong answer. @oscarbenjamin what can be done here ? We have all the tools necessary. We just need to expand/simplify `g` such that the constant terms goes to 0 rather than returning a float I think that `nsimplify` is being called in the wrong place. It is already too late because rounding errors have already happened and so after `nsimplify` the constant term is still not found to be zero. If this is going to use `nsimplify` to handle floats then it needs to be done right at the beginning of the calculation. Otherwise everything needs to be designed to handle floats throughout. cc @arnabnandikgp As you've been contributing to the series module these days, you might be interested in picking up this issue. We have all the tools required to solve this though we are making some mistakes with simplyfing things. I was toggling through the git history of the associated lines and realised that the code block https://github.com/sympy/sympy/blob/aa6db7c688855c181e020929dd34b9639dec64ad/sympy/core/power.py#L1581-L1586 was added through #22213 in order to handle the very same negation problems that we are trying to handle in this case but it worked then as the failing test cases weren't beyond the limits of nsimplify(which is 10**-15) but in this example we presumably exceeded that causing us problems.
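A self-contained reconstruction of the REPL session above (the numeric constants are copied from it; on an unpatched SymPy the commented-out `series` call raises `NotImplementedError`):

```python
from sympy import sqrt, symbols

y = symbols('y')

# the reported failure
expr = 1/sqrt((-y + 1)**2 + (y - 0.6)**4)
# expr.series(y, 0, 5)   # NotImplementedError before the fix

# the intermediate expression built inside Pow._eval_nseries; its constant
# term is mathematically zero ...
g = 0.885269121813031*(y - 1)**2 + 0.885269121813031*(y - 0.6)**4 - 1.0
# ... but float rounding leaves a ~1e-16 constant as the "leading term",
# which is what derails the rest of the expansion
print(g.as_leading_term(y))
```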
[ { "body": "Hello,\r\n\r\nIn our project we use very large functions and a lot of series expansions throw a \"not implemented error\". I managed to extract a relatively simple case:\r\n\r\n`(1/sqrt(( - y + 1)**2 + (y - 0.23)**4)).series(y,x0=0,n=5)`\r\n\r\nWhen using `apart()` the series expansion works: \r\n\r\n`apart((1/sqrt(( - y + 1)**2 + (y - 0.23)**4))).series(y,x0=0,n=5)`\r\n\r\nsadly its not that easy with our larger functions. Any ideas or am I missing something fundamental? \r\n\r\nThe error is: \r\n\r\n`Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/user/.local/lib/python3.10/site-packages/sympy/core/expr.py\", line 3026, in series\r\n rv = self.subs(x, xpos).series(xpos, x0, n, dir, logx=logx, cdir=cdir)\r\n File \"/home/user/.local/lib/python3.10/site-packages/sympy/core/expr.py\", line 3034, in series\r\n s1 = self._eval_nseries(x, n=n, logx=logx, cdir=cdir)\r\n File \"/home/user/.local/lib/python3.10/site-packages/sympy/core/power.py\", line 1758, in _eval_nseries\r\n raise NotImplementedError()\r\nNotImplementedError`", "number": 25165, "title": "Series expansion not working" } ]
b354658bfd7b863ee59897321d9645efbb9d1f57
{ "head_commit": "e8ed910523b3cb18581470335c74b0e2be6c53b4", "head_commit_message": "changes added\n\nSigned-off-by: arnabnandikgp <[email protected]>", "patch_to_review": "diff --git a/sympy/core/power.py b/sympy/core/power.py\nindex a8dc47bdfe67..d695e7d08320 100644\n--- a/sympy/core/power.py\n+++ b/sympy/core/power.py\n@@ -1533,7 +1533,8 @@ def _eval_nseries(self, x, n, logx, cdir=0):\n return res\n \n f = b.as_leading_term(x, logx=logx)\n- g = (b/f - S.One).cancel(expand=False)\n+ g = (b.expand() - f).cancel()\n+ g = g/f\n if not m.is_number:\n raise NotImplementedError()\n maxpow = n - m*e\n@@ -1583,8 +1584,8 @@ def mul(d1, d2):\n # Convert floats like 0.5 to exact SymPy numbers like S.Half, to\n # prevent rounding errors which can induce wrong values of d leading\n # to a NotImplementedError being returned from the block below.\n- from sympy.simplify.simplify import nsimplify\n- _, d = nsimplify(g).leadterm(x, logx=logx)\n+ g = g.replace(lambda x: x.is_Float, lambda x: Rational(x))\n+ _, d = g.leadterm(x, logx=logx)\n if not d.is_positive:\n g = g.simplify()\n if g.is_zero:\ndiff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py\nindex af83b22ab361..a95144d99cc9 100644\n--- a/sympy/core/tests/test_expr.py\n+++ b/sympy/core/tests/test_expr.py\n@@ -479,14 +479,13 @@ def test_as_leading_term():\n # https://github.com/sympy/sympy/issues/21177\n e = -3*x + (x + Rational(3, 2) - sqrt(3)*S.ImaginaryUnit/2)**2\\\n - Rational(3, 2) + 3*sqrt(3)*S.ImaginaryUnit/2\n- assert e.as_leading_term(x) == \\\n- (12*sqrt(3)*x - 12*S.ImaginaryUnit*x)/(4*sqrt(3) + 12*S.ImaginaryUnit)\n+ assert e.as_leading_term(x) == -sqrt(3)*I*x\n \n # https://github.com/sympy/sympy/issues/21245\n e = 1 - x - x**2\n d = (1 + sqrt(5))/2\n assert e.subs(x, y + 1/d).as_leading_term(y) == \\\n- (-576*sqrt(5)*y - 1280*y)/(256*sqrt(5) + 576)\n+ (-40*y - 16*sqrt(5)*y)/(16 + 8*sqrt(5))\n \n \n def test_leadterm2():\ndiff --git a/sympy/core/tests/test_power.py b/sympy/core/tests/test_power.py\nindex 640834ead0a9..f1080e338de7 100644\n--- a/sympy/core/tests/test_power.py\n+++ b/sympy/core/tests/test_power.py\n@@ -18,6 +18,7 @@\n from sympy.testing.pytest import warns, _both_exp_pow\n from sympy.utilities.exceptions import SymPyDeprecationWarning\n from sympy.abc import a, b, c, x, y\n+from sympy.core.numbers import all_close\n \n def test_rational():\n a = Rational(1, 5)\n@@ -661,3 +662,9 @@ def test_issue_26546():\n assert Pow(x+I, Rational(1,2)).is_extended_real is False\n assert Pow(x+I, Rational(1,13)).is_extended_real is False\n assert Pow(x+I, Rational(2,3)).is_extended_real is None\n+\n+\n+def test_issue_25165():\n+ e1 = (1/sqrt(( - x + 1)**2 + (x - 0.23)**4)).series(x, 0, 2)\n+ e2 = 0.998603724830355 + 1.02004923189934*x + O(x**2)\n+ assert all_close(e1, e2)\ndiff --git a/sympy/series/tests/test_series.py b/sympy/series/tests/test_series.py\nindex 6ae7ed2b848b..3313b8e6ee48 100644\n--- a/sympy/series/tests/test_series.py\n+++ b/sympy/series/tests/test_series.py\n@@ -367,9 +367,9 @@ def test_issue_20697():\n def test_issue_21245():\n fi = (1 + sqrt(5))/2\n assert (1/(1 - x - x**2)).series(x, 1/fi, 1).factor() == \\\n- (-4812 - 2152*sqrt(5) + 1686*x + 754*sqrt(5)*x\\\n- + O((x - 2/(1 + sqrt(5)))**2, (x, 2/(1 + sqrt(5)))))/((1 + sqrt(5))\\\n- *(20 + 9*sqrt(5))**2*(x + sqrt(5)*x - 2))\n+ (-37*sqrt(5) - 83 + 13*sqrt(5)*x + 29*x + O((x - 2/(1 + sqrt(5)))**2, (x\\\n+ , 2/(1 + sqrt(5)))))/((2*sqrt(5) + 5)**2*(x + sqrt(5)*x - 2))\n+\n \n \n def test_issue_21938():\n" }
[ { "diff_hunk": "@@ -1533,7 +1533,8 @@ def _eval_nseries(self, x, n, logx, cdir=0):\n return res\n \n f = b.as_leading_term(x, logx=logx)\n- g = (b/f - S.One).cancel(expand=False)\n+ g = (b.expand() - f).cancel()", "line": null, "original_line": 1536, "original_start_line": null, "path": "sympy/core/power.py", "start_line": null, "text": "@user1:\n```suggestion\r\n g = (_mexpand(b) - f).cancel()\r\n```" } ]
cabd6483017136e2b1909e3a75d00df62043aa6e
diff --git a/sympy/core/power.py b/sympy/core/power.py index a8dc47bdfe67..e6e9e310db47 100644 --- a/sympy/core/power.py +++ b/sympy/core/power.py @@ -1533,7 +1533,8 @@ def _eval_nseries(self, x, n, logx, cdir=0): return res f = b.as_leading_term(x, logx=logx) - g = (b/f - S.One).cancel(expand=False) + g = (_mexpand(b) - f).cancel() + g = g/f if not m.is_number: raise NotImplementedError() maxpow = n - m*e @@ -1583,8 +1584,8 @@ def mul(d1, d2): # Convert floats like 0.5 to exact SymPy numbers like S.Half, to # prevent rounding errors which can induce wrong values of d leading # to a NotImplementedError being returned from the block below. - from sympy.simplify.simplify import nsimplify - _, d = nsimplify(g).leadterm(x, logx=logx) + g = g.replace(lambda x: x.is_Float, lambda x: Rational(x)) + _, d = g.leadterm(x, logx=logx) if not d.is_positive: g = g.simplify() if g.is_zero: diff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py index af83b22ab361..a95144d99cc9 100644 --- a/sympy/core/tests/test_expr.py +++ b/sympy/core/tests/test_expr.py @@ -479,14 +479,13 @@ def test_as_leading_term(): # https://github.com/sympy/sympy/issues/21177 e = -3*x + (x + Rational(3, 2) - sqrt(3)*S.ImaginaryUnit/2)**2\ - Rational(3, 2) + 3*sqrt(3)*S.ImaginaryUnit/2 - assert e.as_leading_term(x) == \ - (12*sqrt(3)*x - 12*S.ImaginaryUnit*x)/(4*sqrt(3) + 12*S.ImaginaryUnit) + assert e.as_leading_term(x) == -sqrt(3)*I*x # https://github.com/sympy/sympy/issues/21245 e = 1 - x - x**2 d = (1 + sqrt(5))/2 assert e.subs(x, y + 1/d).as_leading_term(y) == \ - (-576*sqrt(5)*y - 1280*y)/(256*sqrt(5) + 576) + (-40*y - 16*sqrt(5)*y)/(16 + 8*sqrt(5)) def test_leadterm2(): diff --git a/sympy/core/tests/test_power.py b/sympy/core/tests/test_power.py index 640834ead0a9..f1080e338de7 100644 --- a/sympy/core/tests/test_power.py +++ b/sympy/core/tests/test_power.py @@ -18,6 +18,7 @@ from sympy.testing.pytest import warns, _both_exp_pow from sympy.utilities.exceptions import SymPyDeprecationWarning from sympy.abc import a, b, c, x, y +from sympy.core.numbers import all_close def test_rational(): a = Rational(1, 5) @@ -661,3 +662,9 @@ def test_issue_26546(): assert Pow(x+I, Rational(1,2)).is_extended_real is False assert Pow(x+I, Rational(1,13)).is_extended_real is False assert Pow(x+I, Rational(2,3)).is_extended_real is None + + +def test_issue_25165(): + e1 = (1/sqrt(( - x + 1)**2 + (x - 0.23)**4)).series(x, 0, 2) + e2 = 0.998603724830355 + 1.02004923189934*x + O(x**2) + assert all_close(e1, e2) diff --git a/sympy/series/tests/test_series.py b/sympy/series/tests/test_series.py index 6ae7ed2b848b..3313b8e6ee48 100644 --- a/sympy/series/tests/test_series.py +++ b/sympy/series/tests/test_series.py @@ -367,9 +367,9 @@ def test_issue_20697(): def test_issue_21245(): fi = (1 + sqrt(5))/2 assert (1/(1 - x - x**2)).series(x, 1/fi, 1).factor() == \ - (-4812 - 2152*sqrt(5) + 1686*x + 754*sqrt(5)*x\ - + O((x - 2/(1 + sqrt(5)))**2, (x, 2/(1 + sqrt(5)))))/((1 + sqrt(5))\ - *(20 + 9*sqrt(5))**2*(x + sqrt(5)*x - 2)) + (-37*sqrt(5) - 83 + 13*sqrt(5)*x + 29*x + O((x - 2/(1 + sqrt(5)))**2, (x\ + , 2/(1 + sqrt(5)))))/((2*sqrt(5) + 5)**2*(x + sqrt(5)*x - 2)) + def test_issue_21938():
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25875@4f91444
sympy/sympy
Python
25,875
Float(0).is_integer is True, Float(1.1).is_integer is False, Float(1).is_integer is None
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> yet another attempt to fix #23731 alternate to #25865, #25864, #25856 #### Brief description of what is fixed or changed ```python >>> Float(0).is_integer # same as master True >>> print(Float(1).is_integer) # False in master None >>> Float(1.5).is_integer # same as master False ``` #### Other comments There was a long discussion of whether `Float` should be classified as `rational` [here](https://groups.google.com/g/sympy/c/RUYOxxwMgac/m/4-1JXtny32EJ). One major concern was that float/Float objects do not behave as Rationals in operations, e.g. ```python >>> S(1e-16) + S(2) == 2.0 True >>> (x+y).n(subs={x:1e-16,y:2}) == 2. True ``` #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> - core - `Float.is_integer` is now `None` except for `0.0` and `Floats` with a non-zero fraction. Previously, only `Float(0)` was True - Fixed Eq incorrectly evaluating to `False` in some cases with floats. <!-- END RELEASE NOTES -->
2023-11-05T03:44:33Z
`is_zero` broken for integer/float comparison Not sure if this is a recent regression, but comparing an integer-assumption symbol to a Float (or a non-integer to an `Integer`) returns the wrong value: ```python >>> i = Symbol('x', integer=True) >>> Eq(i, 1.0) False >>> Eq(i, 1) Eq(x, 1) >>> ni = Symbol('x', integer=False) >>> Eq(ni, 1) False >>> Eq(ni, 1.0) Eq(x, 1.0) ``` Upon further inspection, it has to do with the difference being checked in `is_eq`: `_dif.is_zero` returns False, even though that should not always be true.
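With the fix in this PR applied, the comparisons from the report are expected to behave as follows (a sketch mirroring the new `test_issue_23731` further down):

```python
from sympy import Eq, Symbol

i = Symbol('i', integer=True)
# left unevaluated: an integer-valued symbol may well equal the Float 1.0
print(Eq(i, 1.0))    # Eq(i, 1.0) instead of False

ni = Symbol('ni', integer=False)
# still evaluates: a declared non-integer cannot equal 1
print(Eq(ni, 1))     # False
```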
It's been this way all the way back to SymPy 1.2 and is because of this: ```python In [1]: Float(1) Out[1]: 1.00000000000000 In [2]: Float(1).is_integer Out[2]: False ``` That in turn is because of this: https://github.com/sympy/sympy/blob/997749da1838628eaeb077c7ab32da6666912e13/sympy/core/numbers.py#L1239-L1240 Apparently any Float other than the zero Float has `is_integer -> False`... That code dates back to #1743 according to `git blame`. There doesn't seem to be any discussion there about `is_integer` at all there. Looking at the commit before though every `Float` had `integer=False`: https://github.com/sympy/sympy/blob/0608bc62619a572ef8bf6bb4f154b435d21ad4ec/sympy/core/numbers.py#L564 The line originates from 99b21ff58ad2e2ba83172512d7f513a7c37e50c3 which is some kind of rewrite commit that obliterates all the history so I can't find the source any earlier. In any case at worst `Float(1).is_integer` should give `None` which would prevent the `Eq` from evaluating. The fix to do that is straight forward: ```diff diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index 0303786..b673e63 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -1237,7 +1237,10 @@ def _eval_is_infinite(self): return False def _eval_is_integer(self): - return self._mpf_ == fzero + if self._mpf_ == fzero: + return True + elif self._mpf_[2] < 0: + return False def _eval_is_negative(self): if self._mpf_ in (_mpf_ninf, _mpf_inf): ``` That only fails one test in core: https://github.com/sympy/sympy/blob/997749da1838628eaeb077c7ab32da6666912e13/sympy/core/tests/test_numbers.py#L496 `None` is probably the best. Floats can have properties that might break the assumptions, e.g., `x + 1 == x` is possible for Float `x`. That was probably the motivation for making it `False`, but `None` is better since some floats do behave like integers in some contexts. Maybe it's unrelated, but after some playing around I found that even an arbitrary symbol with integer=False is assumed as nonzero: ```python >>> sp.Symbol('x', integer=False).is_zero False ``` Is that intended? > an arbitrary symbol with integer=False is assumed as nonzero Zero is an integer so if `integer=False` then `zero=False`. Alright, makes sense. This should not naturally happen in our usage because integer is never explicitly set to False, only None or True. I tried setting Float.is_integer to None at https://github.com/sympy/sympy/pull/25856 to see what happens. If it breaks too many things, then we should try to be more precise, specifically, return True when the Float exactly represents an integer (like 3.0), False when it has a nonzero fractional part, and None when it is too large to represent the integer exactly (like 1e20). I do worry that strange things could happen around the boundaries of exactly representable integers if we do this, though. `is_integer` was intended to be the symbolic equivalent of `is_Integer`. With the improved behavior of `==` (that discriminates between 1 and 1.0 and reports them as unequal since they are not structurally the same) but we allow `Eq(1, 1.0)` to be True. Conceptually, Float and Rational are the same except that the former is more granular, the latter bounded by memory. A possible fix for the current behavior of `Equality` is to replace any Floats with a Dummy that has `rational=True` assumption. ``` >>> i = var('i',integer=1) >>> Eq(i/2, S.Half) Eq(i/2, 1/2) >>> Eq(i/2, .5) Eq(i/2, 0.5) >>> Eq(i, 1.0) Eq(i, 1.0) >>> ni = var('ni',integer=0) >>> Eq(ni, 1.0) False >>> Eq(ni, 1) False ```
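Summarising the behaviour this PR settles on for `Float.is_integer`, a sketch assuming the patched `_eval_is_integer` shown in the diff below:

```python
from sympy import Float

print(Float(0).is_integer)    # True  - exactly zero
print(Float(1.5).is_integer)  # False - non-zero fractional part
print(Float(1).is_integer)    # None  - prints as 1.0, but the Float may
                              #         stand for any nearby real number
```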
[ { "body": "Not sure if this is a recent regression, but comparing an integer-assumption symbol to a Float (or a non-integer to an `Integer`) returns the wrong value:\r\n\r\n```python\r\n>>> i = Symbol('x', integer=True)\r\n>>> Eq(i, 1.0)\r\nFalse\r\n>>> Eq(i, 1)\r\nEq(x, 1)\r\n\r\n>>> ni = Symbol('x', integer=False)\r\n>>> Eq(ni, 1)\r\nFalse\r\n>>> Eq(ni, 1.0)\r\nEq(x, 1.0)\r\n```\r\n\r\nUpon further inspection, it has to do with the difference being checked in `is_eq`: `_dif.is_zero` returns False, even though that should not always be true.\r\n", "number": 23731, "title": "`is_zero` broken for integer/float comparison" } ]
1770afaff55220b9f893fedb5b889f698826bf58
{ "head_commit": "4f914443104958edf49e9014c922e2d0c2adabd2", "head_commit_message": "Update sympy/assumptions/handlers/ntheory.py", "patch_to_review": "diff --git a/sympy/assumptions/handlers/ntheory.py b/sympy/assumptions/handlers/ntheory.py\nindex 4f1397b283ee..ccb91f726e2e 100644\n--- a/sympy/assumptions/handlers/ntheory.py\n+++ b/sympy/assumptions/handlers/ntheory.py\n@@ -5,7 +5,7 @@\n from sympy.assumptions import Q, ask\n from sympy.core import Add, Basic, Expr, Float, Mul, Pow, S\n from sympy.core.numbers import (ImaginaryUnit, Infinity, Integer, NaN,\n- NegativeInfinity, NumberSymbol, Rational)\n+ NegativeInfinity, NumberSymbol, Rational, int_valued)\n from sympy.functions import Abs, im, re\n from sympy.ntheory import isprime\n \n@@ -119,13 +119,15 @@ def _(expr, assumptions):\n \n def _EvenPredicate_number(expr, assumptions):\n # helper method\n+ if isinstance(expr, (float, Float)):\n+ if int_valued(expr):\n+ return None\n+ return False\n try:\n i = int(expr.round())\n- if not (expr - i).equals(0):\n- raise TypeError\n except TypeError:\n return False\n- if isinstance(expr, (float, Float)):\n+ if not (expr - i).equals(0):\n return False\n return i % 2 == 0\n \ndiff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py\nindex 40ed2a8c2d21..b892d724ee0d 100644\n--- a/sympy/assumptions/tests/test_query.py\n+++ b/sympy/assumptions/tests/test_query.py\n@@ -89,7 +89,7 @@ def test_int_12():\n def test_float_1():\n z = 1.0\n assert ask(Q.commutative(z)) is True\n- assert ask(Q.integer(z)) is False\n+ assert ask(Q.integer(z)) is None\n assert ask(Q.rational(z)) is None\n assert ask(Q.real(z)) is True\n assert ask(Q.complex(z)) is True\n@@ -98,10 +98,10 @@ def test_float_1():\n assert ask(Q.positive(z)) is True\n assert ask(Q.negative(z)) is False\n assert ask(Q.even(z)) is False\n- assert ask(Q.odd(z)) is False\n+ assert ask(Q.odd(z)) is None\n assert ask(Q.finite(z)) is True\n- assert ask(Q.prime(z)) is False\n- assert ask(Q.composite(z)) is False\n+ assert ask(Q.prime(z)) is None\n+ assert ask(Q.composite(z)) is None\n assert ask(Q.hermitian(z)) is True\n assert ask(Q.antihermitian(z)) is False\n \n@@ -2307,11 +2307,11 @@ def test_check_old_assumption():\n \n \n def test_issue_9636():\n- assert ask(Q.integer(1.0)) is False\n- assert ask(Q.prime(3.0)) is False\n- assert ask(Q.composite(4.0)) is False\n- assert ask(Q.even(2.0)) is False\n- assert ask(Q.odd(3.0)) is False\n+ assert ask(Q.integer(1.0)) is None\n+ assert ask(Q.prime(3.0)) is None\n+ assert ask(Q.composite(4.0)) is None\n+ assert ask(Q.even(2.0)) is None\n+ assert ask(Q.odd(3.0)) is None\n \n \n def test_autosimp_used_to_fail():\ndiff --git a/sympy/core/numbers.py b/sympy/core/numbers.py\nindex d4e112a0c943..f3839e22d1cd 100644\n--- a/sympy/core/numbers.py\n+++ b/sympy/core/numbers.py\n@@ -749,8 +749,11 @@ class Float(Number):\n \n _mpf_: tuple[int, int, int, int]\n \n- # A Float represents many real numbers,\n- # both rational and irrational.\n+ # A Float, though rational in form, does not behave like\n+ # a rational in all Python expressions so we deal with\n+ # exceptions (where we want to deal with the rational\n+ # form of the Float as a rational) at the source rather\n+ # than assigning a mathematically loaded category of 'rational'\n is_rational = None\n is_irrational = None\n is_number = True\n@@ -969,7 +972,10 @@ def _eval_is_infinite(self):\n return False\n \n def _eval_is_integer(self):\n- return self._mpf_ == fzero\n+ if self._mpf_ == fzero:\n+ return True\n+ if not 
int_valued(self):\n+ return False\n \n def _eval_is_negative(self):\n if self._mpf_ in (_mpf_ninf, _mpf_inf):\ndiff --git a/sympy/core/relational.py b/sympy/core/relational.py\nindex cce1dfe76ccd..5ccfc7e9632e 100644\n--- a/sympy/core/relational.py\n+++ b/sympy/core/relational.py\n@@ -4,6 +4,7 @@\n from .sorting import ordered\n from .evalf import EvalfMixin\n from .function import AppliedUndef\n+from .numbers import int_valued\n from .singleton import S\n from .sympify import _sympify, SympifyError\n from .parameters import global_parameters\n@@ -1571,6 +1572,15 @@ def split_real_imag(expr):\n if z:\n return True\n \n+ # is_zero cannot help decide integer/rational with Float\n+ c, t = dif.as_coeff_Add()\n+ if c.is_Float:\n+ if int_valued(c):\n+ if t.is_integer is False:\n+ return False\n+ elif t.is_rational is False:\n+ return False\n+\n n2 = _n2(lhs, rhs)\n if n2 is not None:\n return _sympify(n2 == 0)\ndiff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py\nindex 1834524a4423..82a221cf4417 100644\n--- a/sympy/core/tests/test_numbers.py\n+++ b/sympy/core/tests/test_numbers.py\n@@ -504,7 +504,7 @@ def eq(a, b):\n # rationality properties\n # if the integer test fails then the use of intlike\n # should be removed from gamma_functions.py\n- assert Float(1).is_integer is False\n+ assert Float(1).is_integer is None\n assert Float(1).is_rational is None\n assert Float(1).is_irrational is None\n assert sqrt(2).n(15).is_rational is None\ndiff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py\nindex 723ec4cafe69..6ab6405c49db 100644\n--- a/sympy/core/tests/test_relational.py\n+++ b/sympy/core/tests/test_relational.py\n@@ -5,7 +5,7 @@\n from sympy.assumptions.ask import Q\n from sympy.core.add import Add\n from sympy.core.basic import Basic\n-from sympy.core.expr import Expr\n+from sympy.core.expr import Expr, unchanged\n from sympy.core.function import Function\n from sympy.core.mul import Mul\n from sympy.core.numbers import (Float, I, Rational, nan, oo, pi, zoo)\n@@ -1252,6 +1252,19 @@ def test_weak_strict():\n assert eq.strict == Lt(x, 1)\n assert eq.weak == eq\n \n+\n+def test_issue_23731():\n+ i = symbols('i', integer=True)\n+ assert unchanged(Eq, i, 1.0)\n+ assert unchanged(Eq, i/2, 0.5)\n+ ni = symbols('ni', integer=False)\n+ assert Eq(ni, 1) == False\n+ assert unchanged(Eq, ni, .1)\n+ assert Eq(ni, 1.0) == False\n+ nr = symbols('nr', rational=False)\n+ assert Eq(nr, .1) == False\n+\n+\n def test_rewrite_Add():\n from sympy.testing.pytest import warns_deprecated_sympy\n with warns_deprecated_sympy():\ndiff --git a/sympy/functions/combinatorial/numbers.py b/sympy/functions/combinatorial/numbers.py\nindex 52f73a2660d9..df0fa1ebd4e8 100644\n--- a/sympy/functions/combinatorial/numbers.py\n+++ b/sympy/functions/combinatorial/numbers.py\n@@ -539,8 +539,8 @@ def _calc_bernoulli(n):\n \n # We implement a specialized memoization scheme to handle each\n # case modulo 6 separately\n- _cache = {0: S.One, 2: Rational(1, 6), 4: Rational(-1, 30)}\n- _highest = {0: 0, 2: 2, 4: 4}\n+ _cache = {0: S.One, 1: Rational(1, 2), 2: Rational(1, 6), 4: Rational(-1, 30)}\n+ _highest = {0: 0, 1: 1, 2: 2, 4: 4}\n \n @classmethod\n def eval(cls, n, x=None):\ndiff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py\nindex 5fce1478c40c..5cee3a8f1eda 100644\n--- a/sympy/matrices/expressions/matexpr.py\n+++ b/sympy/matrices/expressions/matexpr.py\n@@ -235,7 +235,8 @@ def _eval_derivative(self, x):\n @classmethod\n def 
_check_dim(cls, dim):\n \"\"\"Helper function to check invalid matrix dimensions\"\"\"\n- ok = check_assumptions(dim, integer=True, nonnegative=True)\n+ ok = not dim.is_Float and check_assumptions(\n+ dim, integer=True, nonnegative=True)\n if ok is False:\n raise ValueError(\n \"The dimension specification {} should be \"\ndiff --git a/sympy/matrices/expressions/sets.py b/sympy/matrices/expressions/sets.py\nindex 90816c684def..de29b1db14fe 100644\n--- a/sympy/matrices/expressions/sets.py\n+++ b/sympy/matrices/expressions/sets.py\n@@ -57,7 +57,8 @@ def _contains(self, other):\n @classmethod\n def _check_dim(cls, dim):\n \"\"\"Helper function to check invalid matrix dimensions\"\"\"\n- ok = check_assumptions(dim, integer=True, nonnegative=True)\n+ ok = not dim.is_Float and check_assumptions(\n+ dim, integer=True, nonnegative=True)\n if ok is False:\n raise ValueError(\n \"The dimension specification {} should be \"\ndiff --git a/sympy/sets/tests/test_fancysets.py b/sympy/sets/tests/test_fancysets.py\nindex 08f097b9346c..b23c2a99fce0 100644\n--- a/sympy/sets/tests/test_fancysets.py\n+++ b/sympy/sets/tests/test_fancysets.py\n@@ -114,7 +114,8 @@ def test_ImageSet():\n harmonics = ImageSet(Lambda(x, 1/x), S.Naturals)\n assert Rational(1, 5) in harmonics\n assert Rational(.25) in harmonics\n- assert 0.25 not in harmonics\n+ assert harmonics.contains(.25) == Contains(\n+ 0.25, ImageSet(Lambda(x, 1/x), S.Naturals), evaluate=False)\n assert Rational(.3) not in harmonics\n assert (1, 2) not in harmonics\n \n@@ -1268,7 +1269,8 @@ def test_Rationals():\n Rational(1, 3), 3, Rational(-1, 3), -3, Rational(2, 3)]\n assert Basic() not in S.Rationals\n assert S.Half in S.Rationals\n- assert S.Rationals.contains(0.5) == Contains(0.5, S.Rationals, evaluate=False)\n+ assert S.Rationals.contains(0.5) == Contains(\n+ 0.5, S.Rationals, evaluate=False)\n assert 2 in S.Rationals\n r = symbols('r', rational=True)\n assert r in S.Rationals\ndiff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py\nindex 5b72a6c3893f..e1cdd0ea8442 100644\n--- a/sympy/stats/stochastic_process_types.py\n+++ b/sympy/stats/stochastic_process_types.py\n@@ -1644,7 +1644,7 @@ class BernoulliProcess(DiscreteTimeStochasticProcess):\n >>> B = BernoulliProcess(\"B\", p=0.7, success=1, failure=0)\n >>> B.state_space\n {0, 1}\n- >>> (B.p).round(2)\n+ >>> B.p.round(2)\n 0.70\n >>> B.success\n 1\n" }
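The patch leans on the `int_valued` helper from `sympy.core.numbers`; a rough sketch of how it behaves (the exact semantics are taken from recent SymPy versions, so treat the specifics as an assumption):

```python
from sympy import Float
from sympy.core.numbers import int_valued

# reports whether a number holds an exact integer value, independently of
# the symbolic is_integer assumption
print(int_valued(Float(2.0)))   # True
print(int_valued(Float(2.5)))   # False
print(int_valued(2.0))          # True, plain Python floats are handled too
```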
[ { "diff_hunk": "@@ -98,10 +98,10 @@ def test_float_1():\n assert ask(Q.positive(z)) is True\n assert ask(Q.negative(z)) is False\n assert ask(Q.even(z)) is False", "line": null, "original_line": 100, "original_start_line": null, "path": "sympy/assumptions/tests/test_query.py", "start_line": null, "text": "@author:\n```suggestion\r\n assert ask(Q.even(z)) is None\r\n```" } ]
b8d349dd8e1639a88442f8ba88cc82a6c1cb8611
diff --git a/sympy/assumptions/handlers/ntheory.py b/sympy/assumptions/handlers/ntheory.py index 4f1397b283ee..ccb91f726e2e 100644 --- a/sympy/assumptions/handlers/ntheory.py +++ b/sympy/assumptions/handlers/ntheory.py @@ -5,7 +5,7 @@ from sympy.assumptions import Q, ask from sympy.core import Add, Basic, Expr, Float, Mul, Pow, S from sympy.core.numbers import (ImaginaryUnit, Infinity, Integer, NaN, - NegativeInfinity, NumberSymbol, Rational) + NegativeInfinity, NumberSymbol, Rational, int_valued) from sympy.functions import Abs, im, re from sympy.ntheory import isprime @@ -119,13 +119,15 @@ def _(expr, assumptions): def _EvenPredicate_number(expr, assumptions): # helper method + if isinstance(expr, (float, Float)): + if int_valued(expr): + return None + return False try: i = int(expr.round()) - if not (expr - i).equals(0): - raise TypeError except TypeError: return False - if isinstance(expr, (float, Float)): + if not (expr - i).equals(0): return False return i % 2 == 0 diff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py index 40ed2a8c2d21..4a398b80e609 100644 --- a/sympy/assumptions/tests/test_query.py +++ b/sympy/assumptions/tests/test_query.py @@ -89,7 +89,7 @@ def test_int_12(): def test_float_1(): z = 1.0 assert ask(Q.commutative(z)) is True - assert ask(Q.integer(z)) is False + assert ask(Q.integer(z)) is None assert ask(Q.rational(z)) is None assert ask(Q.real(z)) is True assert ask(Q.complex(z)) is True @@ -97,11 +97,11 @@ def test_float_1(): assert ask(Q.imaginary(z)) is False assert ask(Q.positive(z)) is True assert ask(Q.negative(z)) is False - assert ask(Q.even(z)) is False - assert ask(Q.odd(z)) is False + assert ask(Q.even(z)) is None + assert ask(Q.odd(z)) is None assert ask(Q.finite(z)) is True - assert ask(Q.prime(z)) is False - assert ask(Q.composite(z)) is False + assert ask(Q.prime(z)) is None + assert ask(Q.composite(z)) is None assert ask(Q.hermitian(z)) is True assert ask(Q.antihermitian(z)) is False @@ -2307,11 +2307,11 @@ def test_check_old_assumption(): def test_issue_9636(): - assert ask(Q.integer(1.0)) is False - assert ask(Q.prime(3.0)) is False - assert ask(Q.composite(4.0)) is False - assert ask(Q.even(2.0)) is False - assert ask(Q.odd(3.0)) is False + assert ask(Q.integer(1.0)) is None + assert ask(Q.prime(3.0)) is None + assert ask(Q.composite(4.0)) is None + assert ask(Q.even(2.0)) is None + assert ask(Q.odd(3.0)) is None def test_autosimp_used_to_fail(): diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index d4e112a0c943..f3839e22d1cd 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -749,8 +749,11 @@ class Float(Number): _mpf_: tuple[int, int, int, int] - # A Float represents many real numbers, - # both rational and irrational. 
+ # A Float, though rational in form, does not behave like + # a rational in all Python expressions so we deal with + # exceptions (where we want to deal with the rational + # form of the Float as a rational) at the source rather + # than assigning a mathematically loaded category of 'rational' is_rational = None is_irrational = None is_number = True @@ -969,7 +972,10 @@ def _eval_is_infinite(self): return False def _eval_is_integer(self): - return self._mpf_ == fzero + if self._mpf_ == fzero: + return True + if not int_valued(self): + return False def _eval_is_negative(self): if self._mpf_ in (_mpf_ninf, _mpf_inf): diff --git a/sympy/core/relational.py b/sympy/core/relational.py index cce1dfe76ccd..5ccfc7e9632e 100644 --- a/sympy/core/relational.py +++ b/sympy/core/relational.py @@ -4,6 +4,7 @@ from .sorting import ordered from .evalf import EvalfMixin from .function import AppliedUndef +from .numbers import int_valued from .singleton import S from .sympify import _sympify, SympifyError from .parameters import global_parameters @@ -1571,6 +1572,15 @@ def split_real_imag(expr): if z: return True + # is_zero cannot help decide integer/rational with Float + c, t = dif.as_coeff_Add() + if c.is_Float: + if int_valued(c): + if t.is_integer is False: + return False + elif t.is_rational is False: + return False + n2 = _n2(lhs, rhs) if n2 is not None: return _sympify(n2 == 0) diff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py index 1834524a4423..82a221cf4417 100644 --- a/sympy/core/tests/test_numbers.py +++ b/sympy/core/tests/test_numbers.py @@ -504,7 +504,7 @@ def eq(a, b): # rationality properties # if the integer test fails then the use of intlike # should be removed from gamma_functions.py - assert Float(1).is_integer is False + assert Float(1).is_integer is None assert Float(1).is_rational is None assert Float(1).is_irrational is None assert sqrt(2).n(15).is_rational is None diff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py index 723ec4cafe69..6ab6405c49db 100644 --- a/sympy/core/tests/test_relational.py +++ b/sympy/core/tests/test_relational.py @@ -5,7 +5,7 @@ from sympy.assumptions.ask import Q from sympy.core.add import Add from sympy.core.basic import Basic -from sympy.core.expr import Expr +from sympy.core.expr import Expr, unchanged from sympy.core.function import Function from sympy.core.mul import Mul from sympy.core.numbers import (Float, I, Rational, nan, oo, pi, zoo) @@ -1252,6 +1252,19 @@ def test_weak_strict(): assert eq.strict == Lt(x, 1) assert eq.weak == eq + +def test_issue_23731(): + i = symbols('i', integer=True) + assert unchanged(Eq, i, 1.0) + assert unchanged(Eq, i/2, 0.5) + ni = symbols('ni', integer=False) + assert Eq(ni, 1) == False + assert unchanged(Eq, ni, .1) + assert Eq(ni, 1.0) == False + nr = symbols('nr', rational=False) + assert Eq(nr, .1) == False + + def test_rewrite_Add(): from sympy.testing.pytest import warns_deprecated_sympy with warns_deprecated_sympy(): diff --git a/sympy/functions/combinatorial/numbers.py b/sympy/functions/combinatorial/numbers.py index 52f73a2660d9..df0fa1ebd4e8 100644 --- a/sympy/functions/combinatorial/numbers.py +++ b/sympy/functions/combinatorial/numbers.py @@ -539,8 +539,8 @@ def _calc_bernoulli(n): # We implement a specialized memoization scheme to handle each # case modulo 6 separately - _cache = {0: S.One, 2: Rational(1, 6), 4: Rational(-1, 30)} - _highest = {0: 0, 2: 2, 4: 4} + _cache = {0: S.One, 1: Rational(1, 2), 2: Rational(1, 6), 4: Rational(-1, 30)} + 
_highest = {0: 0, 1: 1, 2: 2, 4: 4} @classmethod def eval(cls, n, x=None): diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py index 5fce1478c40c..5cee3a8f1eda 100644 --- a/sympy/matrices/expressions/matexpr.py +++ b/sympy/matrices/expressions/matexpr.py @@ -235,7 +235,8 @@ def _eval_derivative(self, x): @classmethod def _check_dim(cls, dim): """Helper function to check invalid matrix dimensions""" - ok = check_assumptions(dim, integer=True, nonnegative=True) + ok = not dim.is_Float and check_assumptions( + dim, integer=True, nonnegative=True) if ok is False: raise ValueError( "The dimension specification {} should be " diff --git a/sympy/matrices/expressions/sets.py b/sympy/matrices/expressions/sets.py index 90816c684def..de29b1db14fe 100644 --- a/sympy/matrices/expressions/sets.py +++ b/sympy/matrices/expressions/sets.py @@ -57,7 +57,8 @@ def _contains(self, other): @classmethod def _check_dim(cls, dim): """Helper function to check invalid matrix dimensions""" - ok = check_assumptions(dim, integer=True, nonnegative=True) + ok = not dim.is_Float and check_assumptions( + dim, integer=True, nonnegative=True) if ok is False: raise ValueError( "The dimension specification {} should be " diff --git a/sympy/sets/tests/test_fancysets.py b/sympy/sets/tests/test_fancysets.py index 08f097b9346c..b23c2a99fce0 100644 --- a/sympy/sets/tests/test_fancysets.py +++ b/sympy/sets/tests/test_fancysets.py @@ -114,7 +114,8 @@ def test_ImageSet(): harmonics = ImageSet(Lambda(x, 1/x), S.Naturals) assert Rational(1, 5) in harmonics assert Rational(.25) in harmonics - assert 0.25 not in harmonics + assert harmonics.contains(.25) == Contains( + 0.25, ImageSet(Lambda(x, 1/x), S.Naturals), evaluate=False) assert Rational(.3) not in harmonics assert (1, 2) not in harmonics @@ -1268,7 +1269,8 @@ def test_Rationals(): Rational(1, 3), 3, Rational(-1, 3), -3, Rational(2, 3)] assert Basic() not in S.Rationals assert S.Half in S.Rationals - assert S.Rationals.contains(0.5) == Contains(0.5, S.Rationals, evaluate=False) + assert S.Rationals.contains(0.5) == Contains( + 0.5, S.Rationals, evaluate=False) assert 2 in S.Rationals r = symbols('r', rational=True) assert r in S.Rationals diff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py index 5b72a6c3893f..e1cdd0ea8442 100644 --- a/sympy/stats/stochastic_process_types.py +++ b/sympy/stats/stochastic_process_types.py @@ -1644,7 +1644,7 @@ class BernoulliProcess(DiscreteTimeStochasticProcess): >>> B = BernoulliProcess("B", p=0.7, success=1, failure=0) >>> B.state_space {0, 1} - >>> (B.p).round(2) + >>> B.p.round(2) 0.70 >>> B.success 1
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-25916@6aeff4f
sympy/sympy
Python
25,916
changing orient_explicit method to orient_dcm method, also adding new method orient_explicit
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24764 #### Brief description of what is fixed or changed Complete as you mention above : - [x] change method name `orient_explicit` to `orient_dcm` - [x] create another method `orient_explicit` for backward compatibility - [x] mention about rotation matrix direction - [x] Also `exclude-members: orient_explicit ` #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.vector * Implemented `ReferenceFrame.orient_dcm` to match the direction `ReferenceFrame.dcm`. <!-- END RELEASE NOTES -->
2023-11-21T18:16:13Z
Add new method that is more intuitive than orient_explicit() The mechanics method `orient_explicit()` takes a direction cosine matrix that is the inverse of what a `ReferenceFrame.dcm()` outputs and it is confusing. We have to make clear warnings about this everywhere. It was a mistake that we designed it this way (or rather didn't design it). This shows the issue: ```python import sympy as sm import sympy.physics.mechanics as me cxx, cyy, czz = me.dynamicsymbols('c_{xx}, c_{yy}, c_{zz}') cxy, cxz, cyx = me.dynamicsymbols('c_{xy}, c_{xz}, c_{yx}') cyz, czx, czy = me.dynamicsymbols('c_{yz}, c_{zx}, c_{zy}') B_C_A = sm.Matrix([[cxx, cxy, cxz], [cyx, cyy, cyz], [czx, czy, czz]]) A = me.ReferenceFrame('A') B = me.ReferenceFrame('B') B.orient_explicit(A, B_C_A) B.dcm(A) ``` returns: ``` Matrix([ [c_{xx}(t), c_{yx}(t), c_{zx}(t)], [c_{xy}(t), c_{yy}(t), c_{zy}(t)], [c_{xz}(t), c_{yz}(t), c_{zz}(t)]]) ``` which is the inverse of the input. So to get the expected behavior you must do: ```python B.orient_explicit(A, B_C_A.transpose()) B.dcm(A) ``` which returns the expected result: ``` Matrix([ [c_{xx}(t), c_{xy}(t), c_{xz}(t)], [c_{yx}(t), c_{yy}(t), c_{yz}(t)], [c_{zx}(t), c_{zy}(t), c_{zz}(t)]]) ``` We should introduce a `orient_from_dcm()` or similarly named method that works just like `orient_explicit()` but does the intuitive behavior. We can just leave `orient_explicit()` in place for perpetuity to avoid breaking people's code.
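A self-contained version of the demonstration above, using a known rotation so the expected matrix is unambiguous (only existing API is used here):

```python
import sympy.physics.mechanics as me

theta = me.dynamicsymbols('theta')
A = me.ReferenceFrame('A')
B = me.ReferenceFrame('B')
B.orient_axis(A, A.z, theta)   # a known rotation, just to obtain a dcm
B_C_A = B.dcm(A)

C = me.ReferenceFrame('C')
# the transpose has to be handed to orient_explicit() to reproduce B's
# orientation - exactly the surprise described in the report
C.orient_explicit(A, B_C_A.transpose())
assert C.dcm(A) == B_C_A
```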
So far as I see, we just need to handle the transposing inside the `orient_from_dcm()`, right? I can work on this. @moorepants could you confirm my approach to this? If we keep orient_explicit, then you can just have the new method call orient_explicit. I think that having two methods, which do the same but use the transpose, only makes it more confusing. The main problem is that it is not directly clear how the rotation matrix is defined in `orient_explicit`. Therefore, I would propose clarifying that in the docstring. > I think that having two methods, which do the same but use the transpose, only makes it more confusing. We should remove all documentation except the docstring from the orient explicit so it isn't shown in use in examples etc. I would do that with the Learn Mulibody Dynamics text once the new method is in. Also I think pushed for the "explicit" name before and I now don't think that was a good choice. Having dcm in the name fits more with the dcm() method naming. > Therefore, I would propose clarifying that in the docstring. Also needed. We have examples of this in some of the other orient docstrings that try to clarify the definition. > We should remove all documentation except the docstring from the orient explicit so it isn't shown in use in examples etc. Would you also propose deprecating it? > Also I think pushed for the "explicit" name before and I now don't think that was a good choice. [This](https://github.com/sympy/sympy/pull/20318#discussion_r552580815) is the comment you are referring to. And I agree that `orient_dcm` would be clearer. Concluding I would say that there are two problems, which can indeed be solved at once: - The naming of `orient_explicit` is suboptimal and should be improved - It is not clear how a rotation matrix is defined, i.e. from A to B or from B to A. > Would you also propose deprecating it? I'm more of the style of leaving it there so it doesn't break code but hide it. We could remove the docstring from displaying in the online docs, but leave the docstring if someone inspects interactively or in an IDE (because it is still public). It's not like the function is broken, it just stands out as opposite behavior wrt to the other orient and dcm functions. So could you define `orient_dcm` and then alias the suboptimally named one as `orient_inv_dcm = orient_explicit`? And then eventually deprecate `orient_explicit`? Concluding from the issue I would propose to do the following things: - A new method `orient_dcm` (personal opinion, may also be different): - It should have a clear name. Proposed in this issue is `orient_from_dcm`. Personally I would choose `orient_dcm` instead, because `from` seems to me like redundant characters and it matches `orient_axis`, `orient_quaternion`, etc a lot better. - It should use a dcm describing the rotation from the parent to the child (`orient_explicit` is the opposite) - It should have the full description that is currently in `orient_explicit`. However the docstring should be updated to: - the new rotation direction (this mainly changes the examples a bit) - Make the rotation direction more clear. I would propose the following opening sentence: _Sets the orientation of this reference frame relative to another (parent) reference frame using a direction cosine matrix that describes the rotation from the parent to the child._ - `orient_explicit` should be excluded from the online documentation using `:exclude-members: orient_explicit` @moorepants do you agree with these changes? P.S. 
I would also propose moving the implementation to `orient_dcm` and let `orient_explicit` call `orient_dcm`. This would make it more consistent with the deprecation policy, though we are not really deprecating it. I agree with Timo's proposal here.
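For quick reference, a runnable restatement of the behavior under discussion, using the simple rotation from the old `orient_explicit` docstring; the matrix name `A_C_N` and the commented-out final call are illustrative, and `orient_dcm` is only available on a SymPy build that already contains this PR:

```python
from sympy import Matrix, cos, sin, symbols
from sympy.physics.vector import ReferenceFrame

q1 = symbols('q1')
N = ReferenceFrame('N')
A = ReferenceFrame('A')

# Intended orientation: A.dcm(N) for a simple rotation of A about N.x.
A_C_N = Matrix([[1, 0, 0],
                [0, cos(q1), sin(q1)],
                [0, -sin(q1), cos(q1)]])

# With the existing orient_explicit() the transpose has to be passed in
# to get A.dcm(N) back out unchanged -- the source of the confusion above.
A.orient_explicit(N, A_C_N.T)
print(A.dcm(N))        # prints the entries of A_C_N

# The orient_dcm() added by this PR is meant to take A_C_N directly:
# A.orient_dcm(N, A_C_N)
```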
[ { "body": "The mechanics method `orient_explicit()` takes a direction cosine matrix that is the inverse of what a `ReferenceFrame.dcm()` outputs and it is confusing. We have to make clear warnings about this everywhere. It was a mistake that we designed it this way (or rather didn't design it).\r\n\r\nThis shows the issue:\r\n\r\n```python\r\nimport sympy as sm\r\nimport sympy.physics.mechanics as me\r\n\r\ncxx, cyy, czz = me.dynamicsymbols('c_{xx}, c_{yy}, c_{zz}')\r\ncxy, cxz, cyx = me.dynamicsymbols('c_{xy}, c_{xz}, c_{yx}')\r\ncyz, czx, czy = me.dynamicsymbols('c_{yz}, c_{zx}, c_{zy}')\r\n\r\nB_C_A = sm.Matrix([[cxx, cxy, cxz],\r\n [cyx, cyy, cyz],\r\n [czx, czy, czz]])\r\n\r\nA = me.ReferenceFrame('A')\r\nB = me.ReferenceFrame('B')\r\nB.orient_explicit(A, B_C_A)\r\nB.dcm(A)\r\n```\r\n\r\nreturns:\r\n\r\n```\r\nMatrix([\r\n[c_{xx}(t), c_{yx}(t), c_{zx}(t)],\r\n[c_{xy}(t), c_{yy}(t), c_{zy}(t)],\r\n[c_{xz}(t), c_{yz}(t), c_{zz}(t)]])\r\n```\r\n\r\nwhich is the inverse of the input.\r\n\r\nSo to get the expected behavior you must do:\r\n\r\n```python\r\nB.orient_explicit(A, B_C_A.transpose())\r\nB.dcm(A)\r\n```\r\nwhich returns the expected result:\r\n\r\n```\r\nMatrix([\r\n[c_{xx}(t), c_{xy}(t), c_{xz}(t)],\r\n[c_{yx}(t), c_{yy}(t), c_{yz}(t)],\r\n[c_{zx}(t), c_{zy}(t), c_{zz}(t)]])\r\n```\r\n\r\nWe should introduce a `orient_from_dcm()` or similarly named method that works just like `orient_explicit()` but does the intuitive behavior. We can just leave `orient_explicit()` in place for perpetuity to avoid breaking people's code.", "number": 24764, "title": "Add new method that is more intuitive than orient_explicit()" } ]
69d3af720c7449a9dcddf17b7e2a4e8724caf15a
{ "head_commit": "6aeff4f2a0a1f43ef8671899eeba6f87ef1a5688", "head_commit_message": "Adding .mailmap", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 34ec49129623..95b414ba51c1 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -654,6 +654,7 @@ Harold Erbin <[email protected]>\n Harrison Oates <[email protected]>\n Harry Mountain <[email protected]>\n Harry Zheng <[email protected]>\n+Harsh Kasat <[email protected]>\n Harsh Agarwal <[email protected]>\n Harsh Gupta <[email protected]> <[email protected]>\n Harsh Jain <[email protected]>\ndiff --git a/doc/src/modules/physics/vector/api/classes.rst b/doc/src/modules/physics/vector/api/classes.rst\nindex 7e2e10c07948..74be80ed6df1 100644\n--- a/doc/src/modules/physics/vector/api/classes.rst\n+++ b/doc/src/modules/physics/vector/api/classes.rst\n@@ -7,6 +7,8 @@ Essential Classes\n \n .. autoclass:: sympy.physics.vector.frame.ReferenceFrame\n :members:\n+ :exclude-members: orient_explicit\n+ :exclude-members: orient_dcm\n \n .. autoclass:: sympy.physics.vector.vector.Vector\n :members:\ndiff --git a/sympy/physics/vector/frame.py b/sympy/physics/vector/frame.py\nindex e93435188e22..3372c2afe7df 100644\n--- a/sympy/physics/vector/frame.py\n+++ b/sympy/physics/vector/frame.py\n@@ -720,57 +720,38 @@ def orient_explicit(self, parent, dcm):\n dcm : Matrix, shape(3, 3)\n Direction cosine matrix that specifies the relative rotation\n between the two reference frames.\n-\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.\n Warns\n ======\n \n UserWarning\n If the orientation creates a kinematic loop.\n \n- Examples\n- ========\n-\n- Setup variables for the examples:\n-\n- >>> from sympy import symbols, Matrix, sin, cos\n- >>> from sympy.physics.vector import ReferenceFrame\n- >>> q1 = symbols('q1')\n- >>> A = ReferenceFrame('A')\n- >>> B = ReferenceFrame('B')\n- >>> N = ReferenceFrame('N')\n-\n- A simple rotation of ``A`` relative to ``N`` about ``N.x`` is defined\n- by the following direction cosine matrix:\n+ \"\"\"\n+ self.orient_dcm(parent = parent, dcm = dcm)\n \n- >>> dcm = Matrix([[1, 0, 0],\n- ... [0, cos(q1), -sin(q1)],\n- ... 
[0, sin(q1), cos(q1)]])\n- >>> A.orient_explicit(N, dcm)\n- >>> A.dcm(N)\n- Matrix([\n- [1, 0, 0],\n- [0, cos(q1), sin(q1)],\n- [0, -sin(q1), cos(q1)]])\n+ def orient_dcm(self, parent, dcm):\n+ \"\"\"Sets the orientation of this reference frame relative to a parent\n+ reference frame by explicitly setting the direction cosine matrix.\n \n- This is equivalent to using ``orient_axis()``:\n+ Parameters\n+ ==========\n \n- >>> B.orient_axis(N, N.x, q1)\n- >>> B.dcm(N)\n- Matrix([\n- [1, 0, 0],\n- [0, cos(q1), sin(q1)],\n- [0, -sin(q1), cos(q1)]])\n+ parent : ReferenceFrame\n+ Reference frame that this reference frame will be rotated relative\n+ to.\n+ dcm : Matrix, shape(3, 3)\n+ Direction cosine matrix that specifies the relative rotation\n+ between the two reference frames.\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.\n+ Warns\n+ ======\n \n- **Note carefully that** ``N.dcm(B)`` **(the transpose) would be passed\n- into** ``orient_explicit()`` **for** ``A.dcm(N)`` **to match**\n- ``B.dcm(N)``:\n+ UserWarning\n+ If the orientation creates a kinematic loop.\n \n- >>> A.orient_explicit(N, N.dcm(B))\n- >>> A.dcm(N)\n- Matrix([\n- [1, 0, 0],\n- [0, cos(q1), sin(q1)],\n- [0, -sin(q1), cos(q1)]])\n \n \"\"\"\n \n@@ -780,9 +761,7 @@ def orient_explicit(self, parent, dcm):\n if not isinstance(dcm, MatrixBase):\n raise TypeError(\"Amounts must be a SymPy Matrix type object.\")\n \n- parent_orient_dcm = dcm\n-\n- self._dcm(parent, parent_orient_dcm)\n+ self._dcm(parent, dcm.T)\n \n wvec = self._w_diff_dcm(parent)\n self._ang_vel_dict.update({parent: wvec})\n@@ -1202,7 +1181,7 @@ def orient(self, parent, rot_type, amounts, rot_order=''):\n self.orient_axis(parent, amounts[1], amounts[0])\n \n elif rot_type == 'DCM':\n- self.orient_explicit(parent, amounts)\n+ self.orient_dcm(parent, amounts)\n \n elif rot_type == 'BODY':\n self.orient_body_fixed(parent, amounts, rot_order)\n@@ -1313,7 +1292,7 @@ def orientnew(self, newname, rot_type, amounts, rot_order='',\n newframe.orient_axis(self, amounts[1], amounts[0])\n \n elif rot_type == 'DCM':\n- newframe.orient_explicit(self, amounts)\n+ newframe.orient_dcm(self, amounts)\n \n elif rot_type == 'BODY':\n newframe.orient_body_fixed(self, amounts, rot_order)\ndiff --git a/sympy/physics/vector/tests/test_frame.py b/sympy/physics/vector/tests/test_frame.py\nindex 8e6e2cb1cab7..2550ce990a2f 100644\n--- a/sympy/physics/vector/tests/test_frame.py\n+++ b/sympy/physics/vector/tests/test_frame.py\n@@ -451,10 +451,10 @@ def test_dcm_diff_16824():\n assert simplify(AwB.dot(A.y) - alpha2) == 0\n assert simplify(AwB.dot(B.y) - beta2) == 0\n \n-def test_orient_explicit():\n+def test_orient_dcm():\n A = ReferenceFrame('A')\n B = ReferenceFrame('B')\n- A.orient_explicit(B, eye(3))\n+ A.orient_dcm(B, eye(3))\n assert A.dcm(B) == Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])\n \n def test_orient_axis():\n" }
[ { "diff_hunk": "@@ -720,57 +720,38 @@ def orient_explicit(self, parent, dcm):\n dcm : Matrix, shape(3, 3)\n Direction cosine matrix that specifies the relative rotation\n between the two reference frames.\n-\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.\n Warns\n ======\n \n UserWarning\n If the orientation creates a kinematic loop.\n \n- Examples\n- ========\n-\n- Setup variables for the examples:\n-\n- >>> from sympy import symbols, Matrix, sin, cos\n- >>> from sympy.physics.vector import ReferenceFrame\n- >>> q1 = symbols('q1')\n- >>> A = ReferenceFrame('A')\n- >>> B = ReferenceFrame('B')\n- >>> N = ReferenceFrame('N')\n-\n- A simple rotation of ``A`` relative to ``N`` about ``N.x`` is defined\n- by the following direction cosine matrix:\n+ \"\"\"\n+ self.orient_dcm(parent = parent, dcm = dcm)", "line": null, "original_line": 732, "original_start_line": null, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\nShould be `dcm.T`" }, { "diff_hunk": "@@ -1313,7 +1292,7 @@ def orientnew(self, newname, rot_type, amounts, rot_order='',\n newframe.orient_axis(self, amounts[1], amounts[0])\n \n elif rot_type == 'DCM':\n- newframe.orient_explicit(self, amounts)\n+ newframe.orient_dcm(self, amounts)", "line": null, "original_line": 1295, "original_start_line": null, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\nSame here, this is backward incompatible." }, { "diff_hunk": "@@ -451,10 +451,10 @@ def test_dcm_diff_16824():\n assert simplify(AwB.dot(A.y) - alpha2) == 0\n assert simplify(AwB.dot(B.y) - beta2) == 0\n \n-def test_orient_explicit():", "line": 454, "original_line": 454, "original_start_line": null, "path": "sympy/physics/vector/tests/test_frame.py", "start_line": null, "text": "@user1:\nDon't change the `orient_explicit` test because we still want to make sure it works. Instead, add a new `test_orient_dcm`, where you showcase that [this](https://github.com/sympy/sympy/issues/24764#issue-1596398197) works." }, { "diff_hunk": "@@ -1202,7 +1181,7 @@ def orient(self, parent, rot_type, amounts, rot_order=''):\n self.orient_axis(parent, amounts[1], amounts[0])\n \n elif rot_type == 'DCM':\n- self.orient_explicit(parent, amounts)\n+ self.orient_dcm(parent, amounts)", "line": null, "original_line": 1184, "original_start_line": null, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\nI don't think we can make this change, as this will break existing code." }, { "diff_hunk": "@@ -720,57 +720,38 @@ def orient_explicit(self, parent, dcm):\n dcm : Matrix, shape(3, 3)\n Direction cosine matrix that specifies the relative rotation\n between the two reference frames.\n-\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.", "line": null, "original_line": 724, "original_start_line": null, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\nThis sentence is unclear.\n\n@user2:\nIn this package we use the word \"direction cosine matrix\" instead of \"rotation matrix\". 
They are the same thing, so why would we need two?\n\n@author:\n```\r\ndef orient_dcm(self, parent, dcm):\r\n \"\"\" Sets the orientation of this reference frame relative to a parent\r\n reference frame by explicitly setting the direction cosine matrix.\r\n\r\n Parameters\r\n ==========\r\n\r\n parent: ReferenceFrame\r\n Reference frame that this reference frame will be rotated relative\r\n to.\r\n dcm : Matrix, shape(3, 3)\r\n Direction cosine matrix that specifies the relative rotation\r\n between the two reference frames.\r\n rotation matrix: Direction Cosine Matrix\r\n Rotate relative to ``parent``.\r\n```\r\n* When I read [reference](https://moorepants.github.io/learn-multibody-dynamics/orientation.html#orientation-simple-successive), I understand that ``B.orient_dcm(A, B_C_A)`` or `B.orient_dcm(A, B_C_A)` means that B (reference frame) is relatively rotated to A (reference frame). We can verify using `B.dcm(A)` .\r\n* Please let me know if any modifications or corrections are needed for the code.\r\n* Sorry for the spam message. I'm new to open source and apologize for any inconvenience. I'm providing:\n\n@user2:\nThe docstring implies you have added a new argument, but actually it is unclear.\n\n@author:\n```\r\n def orient_dcm(self, parent, dcm):\r\n \"\"\"Sets the orientation of this reference frame relative to another (parent) reference frame\r\n using a direction cosine matrix that describes the rotation from the parent to the child.\r\n\r\n Parameters\r\n ==========\r\n\r\n parent : ReferenceFrame\r\n Reference frame that this reference frame will be rotated relative\r\n to.\r\n dcm : Matrix, shape(3, 3)\r\n Direction cosine matrix that specifies the relative rotation\r\n between the two reference frames.\r\n\r\n Warns\r\n ======\r\n\r\n```\r\nI think the current docstring implies that a change is needed. Please update me if any change is need.\n\n@user2:\nI am referring to this addition:\r\n\r\n```\r\n rotation matrix: Direction Cosine Matrix\r\n Rotate relative to ``parent``.\r\n```\r\n\r\nIs this a new argument? It is formatted in that way.\n\n@author:\nThere is no new argument. I made a mistake by writing 'rotation matrix' as an argument. \r\nHere, I have explained about rotation instead.\r\n>Sets the orientation of this reference frame relative to another (parent) reference frame\r\n using a direction cosine matrix that describes the rotation from the parent to the child.\r\n\r\nPlease update me if any change is need.\n\n@user3:\nOkay, could you then remove the `rotation matrix` argument and use something as you propose [here](https://github.com/sympy/sympy/pull/25916/files#r1402041601). Make sure that it is clear that the DCM describes the orientation of the parent with respect to the child in `orient_explicit`." 
}, { "diff_hunk": "@@ -720,57 +720,38 @@ def orient_explicit(self, parent, dcm):\n dcm : Matrix, shape(3, 3)\n Direction cosine matrix that specifies the relative rotation\n between the two reference frames.\n-\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.\n Warns\n ======\n \n UserWarning\n If the orientation creates a kinematic loop.\n \n- Examples\n- ========\n-\n- Setup variables for the examples:\n-\n- >>> from sympy import symbols, Matrix, sin, cos\n- >>> from sympy.physics.vector import ReferenceFrame\n- >>> q1 = symbols('q1')\n- >>> A = ReferenceFrame('A')\n- >>> B = ReferenceFrame('B')\n- >>> N = ReferenceFrame('N')\n-\n- A simple rotation of ``A`` relative to ``N`` about ``N.x`` is defined\n- by the following direction cosine matrix:\n+ \"\"\"\n+ self.orient_dcm(parent = parent, dcm = dcm)\n \n- >>> dcm = Matrix([[1, 0, 0],\n- ... [0, cos(q1), -sin(q1)],\n- ... [0, sin(q1), cos(q1)]])\n- >>> A.orient_explicit(N, dcm)\n- >>> A.dcm(N)\n- Matrix([\n- [1, 0, 0],\n- [0, cos(q1), sin(q1)],\n- [0, -sin(q1), cos(q1)]])\n+ def orient_dcm(self, parent, dcm):\n+ \"\"\"Sets the orientation of this reference frame relative to a parent\n+ reference frame by explicitly setting the direction cosine matrix.\n \n- This is equivalent to using ``orient_axis()``:\n+ Parameters\n+ ==========\n \n- >>> B.orient_axis(N, N.x, q1)\n- >>> B.dcm(N)\n- Matrix([\n- [1, 0, 0],\n- [0, cos(q1), sin(q1)],\n- [0, -sin(q1), cos(q1)]])\n+ parent : ReferenceFrame\n+ Reference frame that this reference frame will be rotated relative\n+ to.\n+ dcm : Matrix, shape(3, 3)\n+ Direction cosine matrix that specifies the relative rotation\n+ between the two reference frames.\n+ rotation matrix : Matrix Rotation Direction\n+ The simple matrix rotation relative ``parent`` to ``dcm``.", "line": null, "original_line": 748, "original_start_line": 735, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\nThe same change is required here. Only now the DCM describes the orientation of the child with respect to the parent." }, { "diff_hunk": "@@ -7,6 +7,8 @@ Essential Classes\n \n .. autoclass:: sympy.physics.vector.frame.ReferenceFrame\n :members:\n+ :exclude-members: orient_explicit\n+ :exclude-members: orient_dcm", "line": null, "original_line": 11, "original_start_line": null, "path": "doc/src/modules/physics/vector/api/classes.rst", "start_line": null, "text": "@user1:\nYou should include `orient_dcm` because we would like people to see it in the online documentation, such that they'll use it." } ]
f2da0e41c4d42ca7171f334c15efb1de00bf562b
diff --git a/.mailmap b/.mailmap index 34ec49129623..5e153ae5e20b 100644 --- a/.mailmap +++ b/.mailmap @@ -657,6 +657,7 @@ Harry Zheng <[email protected]> Harsh Agarwal <[email protected]> Harsh Gupta <[email protected]> <[email protected]> Harsh Jain <[email protected]> +Harsh Kasat <[email protected]> Harshil Goel <[email protected]> Harshil Goel <[email protected]> <[email protected]> Harshil Meena <[email protected]> diff --git a/doc/src/modules/physics/vector/api/classes.rst b/doc/src/modules/physics/vector/api/classes.rst index 7e2e10c07948..79f1aa90919f 100644 --- a/doc/src/modules/physics/vector/api/classes.rst +++ b/doc/src/modules/physics/vector/api/classes.rst @@ -7,6 +7,7 @@ Essential Classes .. autoclass:: sympy.physics.vector.frame.ReferenceFrame :members: + :exclude-members: orient_explicit .. autoclass:: sympy.physics.vector.vector.Vector :members: diff --git a/sympy/physics/vector/frame.py b/sympy/physics/vector/frame.py index e93435188e22..23f37b687356 100644 --- a/sympy/physics/vector/frame.py +++ b/sympy/physics/vector/frame.py @@ -708,8 +708,8 @@ def orient_axis(self, parent, axis, angle): self._var_dict = {} def orient_explicit(self, parent, dcm): - """Sets the orientation of this reference frame relative to a parent - reference frame by explicitly setting the direction cosine matrix. + """Sets the orientation of this reference frame relative to another (parent) reference frame + using a direction cosine matrix that describes the rotation from the parent to the child. Parameters ========== @@ -773,16 +773,77 @@ def orient_explicit(self, parent, dcm): [0, -sin(q1), cos(q1)]]) """ - _check_frame(parent) # amounts must be a Matrix type object # (e.g. sympy.matrices.dense.MutableDenseMatrix). if not isinstance(dcm, MatrixBase): raise TypeError("Amounts must be a SymPy Matrix type object.") - parent_orient_dcm = dcm + self.orient_dcm(parent, dcm.T) + + def orient_dcm(self, parent, dcm): + """Sets the orientation of this reference frame relative to another (parent) reference frame + using a direction cosine matrix that describes the rotation from the child to the parent. + + Parameters + ========== + + parent : ReferenceFrame + Reference frame that this reference frame will be rotated relative + to. + dcm : Matrix, shape(3, 3) + Direction cosine matrix that specifies the relative rotation + between the two reference frames. + + Warns + ====== + + UserWarning + If the orientation creates a kinematic loop. + + Examples + ======== + + Setup variables for the examples: + + >>> from sympy import symbols, Matrix, sin, cos + >>> from sympy.physics.vector import ReferenceFrame + >>> q1 = symbols('q1') + >>> A = ReferenceFrame('A') + >>> B = ReferenceFrame('B') + >>> N = ReferenceFrame('N') + + A simple rotation of ``A`` relative to ``N`` about ``N.x`` is defined + by the following direction cosine matrix: + + >>> dcm = Matrix([[1, 0, 0], + ... [0, cos(q1), sin(q1)], + ... [0, -sin(q1), cos(q1)]]) + >>> A.orient_dcm(N, dcm) + >>> A.dcm(N) + Matrix([ + [1, 0, 0], + [0, cos(q1), sin(q1)], + [0, -sin(q1), cos(q1)]]) + + This is equivalent to using ``orient_axis()``: + + >>> B.orient_axis(N, N.x, q1) + >>> B.dcm(N) + Matrix([ + [1, 0, 0], + [0, cos(q1), sin(q1)], + [0, -sin(q1), cos(q1)]]) + + """ + + _check_frame(parent) + # amounts must be a Matrix type object + # (e.g. sympy.matrices.dense.MutableDenseMatrix). 
+ if not isinstance(dcm, MatrixBase): + raise TypeError("Amounts must be a SymPy Matrix type object.") - self._dcm(parent, parent_orient_dcm) + self._dcm(parent, dcm.T) wvec = self._w_diff_dcm(parent) self._ang_vel_dict.update({parent: wvec}) diff --git a/sympy/physics/vector/tests/test_frame.py b/sympy/physics/vector/tests/test_frame.py index 8e6e2cb1cab7..8e2d0234c7d2 100644 --- a/sympy/physics/vector/tests/test_frame.py +++ b/sympy/physics/vector/tests/test_frame.py @@ -452,10 +452,38 @@ def test_dcm_diff_16824(): assert simplify(AwB.dot(B.y) - beta2) == 0 def test_orient_explicit(): + cxx, cyy, czz = dynamicsymbols('c_{xx}, c_{yy}, c_{zz}') + cxy, cxz, cyx = dynamicsymbols('c_{xy}, c_{xz}, c_{yx}') + cyz, czx, czy = dynamicsymbols('c_{yz}, c_{zx}, c_{zy}') + dcxx, dcyy, dczz = dynamicsymbols('c_{xx}, c_{yy}, c_{zz}', 1) + dcxy, dcxz, dcyx = dynamicsymbols('c_{xy}, c_{xz}, c_{yx}', 1) + dcyz, dczx, dczy = dynamicsymbols('c_{yz}, c_{zx}, c_{zy}', 1) A = ReferenceFrame('A') B = ReferenceFrame('B') - A.orient_explicit(B, eye(3)) - assert A.dcm(B) == Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) + B_C_A = Matrix([[cxx, cxy, cxz], + [cyx, cyy, cyz], + [czx, czy, czz]]) + B_w_A = ((cyx*dczx + cyy*dczy + cyz*dczz)*B.x + + (czx*dcxx + czy*dcxy + czz*dcxz)*B.y + + (cxx*dcyx + cxy*dcyy + cxz*dcyz)*B.z) + A.orient_explicit(B, B_C_A) + assert B.dcm(A) == B_C_A + assert A.ang_vel_in(B) == B_w_A + assert B.ang_vel_in(A) == -B_w_A + +def test_orient_dcm(): + cxx, cyy, czz = dynamicsymbols('c_{xx}, c_{yy}, c_{zz}') + cxy, cxz, cyx = dynamicsymbols('c_{xy}, c_{xz}, c_{yx}') + cyz, czx, czy = dynamicsymbols('c_{yz}, c_{zx}, c_{zy}') + B_C_A = Matrix([[cxx, cxy, cxz], + [cyx, cyy, cyz], + [czx, czy, czz]]) + A = ReferenceFrame('A') + B = ReferenceFrame('B') + B.orient_dcm(A, B_C_A) + assert B.dcm(A) == Matrix([[cxx, cxy, cxz], + [cyx, cyy, cyz], + [czx, czy, czz]]) def test_orient_axis(): A = ReferenceFrame('A')
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-25822@838de45
sympy/sympy
Python
25,822
watch for iterable sols
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> fixes #25820 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-10-23T13:57:23Z
AttributeError: when solve differential equation: `y(x).diff(x).diff(x)*y(x)**3+49` ```python from sympy import Derivative, Function, Symbol, dsolve x = Symbol('x') y = Function('y') dsolve(y(x).diff(x).diff(x)*y(x)**3+49, y(x)) ``` the error: ```python Traceback (most recent call last): File "sympy/solvers/ode/ode.py", line 640, in dsolve return _helper_simplify(eq, hint, hints, simplify, ics=ics) File "sympy/solvers/ode/ode.py", line 690, in _helper_simplify rv = _remove_redundant_solutions(eq, rv, order, func.args[0]) File "sympy/solvers/ode/ode.py", line 2627, in _remove_redundant_solutions if is_special_case_of(soln1, soln2): File "sympy/solvers/ode/ode.py", line 2622, in is_special_case_of return _is_special_case_of(soln1, soln2, eq, order, var) File "sympy/solvers/ode/ode.py", line 2655, in _is_special_case_of soln1 = soln1.rhs - soln1.lhs AttributeError: 'list' object has no attribute 'rhs' ```
Some how the solutions are being returned from somewhere as a list of lists of solutions rather than a list of solutions. ```diff diff --git a/sympy/solvers/ode/ode.py b/sympy/solvers/ode/ode.py index 75bccdf8d5..4fc6d75d01 100644 --- a/sympy/solvers/ode/ode.py +++ b/sympy/solvers/ode/ode.py @@ -639,6 +639,7 @@ def recur_len(l): hint = hints['hint'] return _helper_simplify(eq, hint, hints, simplify, ics=ics) + def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): r""" Helper function of dsolve that calls the respective @@ -670,7 +671,13 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): else: sols = solvefunc(eq, func, order, match) if iterable(sols): - rv = [odesimp(eq, s, func, hint) for s in sols] + rv = [] + for s in sols: + simp = odesimp(eq, s, func, hint) + if len(s) > 1: + rv.extend(simp) + else: + rv.append(simp) else: rv = odesimp(eq, sols, func, hint) else: @@ -686,6 +693,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): rv = _handle_Integral(exprs, func, hint) if isinstance(rv, list): + assert all(isinstance(i, Eq) for i in rv), rv if simplify: rv = _remove_redundant_solutions(eq, rv, order, func.args[0]) if len(rv) == 1: @@ -707,6 +715,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): rv = rv1 return rv + def solve_ics(sols, funcs, constants, ics): """ Solve for the constants given initial conditions ```
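The diff above only changes how `_helper_simplify` collects what the hint solvers return; the core of it is a one-level flatten so that every entry handed to `_remove_redundant_solutions` is a single `Eq`. A self-contained sketch of that pattern, with a made-up `nested` list standing in for a solver's output (the helper name is illustrative):

```python
from sympy import Eq, Symbol

x = Symbol('x')

def flatten_one_level(sols):
    """Expand nested lists so the result is a flat list of Eq objects."""
    flat = []
    for s in sols:
        if isinstance(s, (list, tuple)):  # a hint returned several solutions
            flat.extend(s)
        else:                             # a single Eq
            flat.append(s)
    return flat

nested = [[Eq(x, 1), Eq(x, -1)], Eq(x, 0)]   # stand-in for solver output
flat = flatten_one_level(nested)
assert all(isinstance(e, Eq) for e in flat)
print(flat)   # [Eq(x, 1), Eq(x, -1), Eq(x, 0)]
```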
[ { "body": "```python\r\nfrom sympy import Derivative, Function, Symbol, dsolve\r\nx = Symbol('x')\r\ny = Function('y')\r\ndsolve(y(x).diff(x).diff(x)*y(x)**3+49, y(x))\r\n```\r\n\r\nthe error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"sympy/solvers/ode/ode.py\", line 640, in dsolve\r\n return _helper_simplify(eq, hint, hints, simplify, ics=ics)\r\n File \"sympy/solvers/ode/ode.py\", line 690, in _helper_simplify\r\n rv = _remove_redundant_solutions(eq, rv, order, func.args[0])\r\n File \"sympy/solvers/ode/ode.py\", line 2627, in _remove_redundant_solutions\r\n if is_special_case_of(soln1, soln2):\r\n File \"sympy/solvers/ode/ode.py\", line 2622, in is_special_case_of\r\n return _is_special_case_of(soln1, soln2, eq, order, var)\r\n File \"sympy/solvers/ode/ode.py\", line 2655, in _is_special_case_of\r\n soln1 = soln1.rhs - soln1.lhs\r\nAttributeError: 'list' object has no attribute 'rhs'\r\n```", "number": 25820, "title": "AttributeError: when solve differential equation: `y(x).diff(x).diff(x)*y(x)**3+49`" } ]
4cc0be9f5533ea5ffb9c50c5ccb96af810d70417
{ "head_commit": "838de45b18f7a4be7e0b85ed4da3ddc053fbbc6c", "head_commit_message": "watch for iterable sols", "patch_to_review": "diff --git a/sympy/solvers/ode/ode.py b/sympy/solvers/ode/ode.py\nindex 75bccdf8d56b..d9d40e37f4fe 100644\n--- a/sympy/solvers/ode/ode.py\n+++ b/sympy/solvers/ode/ode.py\n@@ -639,6 +639,7 @@ def recur_len(l):\n hint = hints['hint']\n return _helper_simplify(eq, hint, hints, simplify, ics=ics)\n \n+\n def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs):\n r\"\"\"\n Helper function of dsolve that calls the respective\n@@ -670,9 +671,15 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs):\n else:\n sols = solvefunc(eq, func, order, match)\n if iterable(sols):\n- rv = [odesimp(eq, s, func, hint) for s in sols]\n+ rv = []\n+ for s in sols:\n+ simp = odesimp(eq, s, func, hint)\n+ if iterable(simp):\n+ rv.extend(simp)\n+ else:\n+ rv.append(simp)\n else:\n- rv = odesimp(eq, sols, func, hint)\n+ rv = odesimp(eq, sols, func, hint)\n else:\n # We still want to integrate (you can disable it separately with the hint)\n if isinstance(solvefunc, SingleODESolver):\n@@ -686,6 +693,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs):\n rv = _handle_Integral(exprs, func, hint)\n \n if isinstance(rv, list):\n+ assert all(isinstance(i, Eq) for i in rv), rv # if not => internal error\n if simplify:\n rv = _remove_redundant_solutions(eq, rv, order, func.args[0])\n if len(rv) == 1:\n@@ -707,6 +715,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs):\n rv = rv1\n return rv\n \n+\n def solve_ics(sols, funcs, constants, ics):\n \"\"\"\n Solve for the constants given initial conditions\ndiff --git a/sympy/solvers/ode/tests/test_ode.py b/sympy/solvers/ode/tests/test_ode.py\nindex 547b8e425c34..76f6508c0f2a 100644\n--- a/sympy/solvers/ode/tests/test_ode.py\n+++ b/sympy/solvers/ode/tests/test_ode.py\n@@ -1094,3 +1094,12 @@ def test_issue_23425():\n assert classify_ode(eq) == \\\n ('Liouville', 'nth_order_reducible', \\\n '2nd_power_series_ordinary', 'Liouville_Integral')\n+\n+\n+def test_issue_25820():\n+ x = Symbol('x')\n+ y = Function('y')\n+ eq = y(x)**3*Derivative(y(x), (x, 1)) + 49\n+ r = (C1 - 196*x)**(S(1)/4)\n+ assert dsolve(eq, y(x)) == [\n+ Eq(y(x), -I*r), Eq(y(x), I*r), Eq(y(x), -r), Eq(y(x), r)]\n" }
[ { "diff_hunk": "@@ -1094,3 +1094,12 @@ def test_issue_23425():\n assert classify_ode(eq) == \\\n ('Liouville', 'nth_order_reducible', \\\n '2nd_power_series_ordinary', 'Liouville_Integral')\n+\n+\n+def test_issue_25820():\n+ x = Symbol('x')\n+ y = Function('y')\n+ eq = y(x)**3*Derivative(y(x), (x, 1)) + 49", "line": null, "original_line": 1102, "original_start_line": null, "path": "sympy/solvers/ode/tests/test_ode.py", "start_line": null, "text": "@user1:\nIt should be a second derivative:\r\n```\r\nIn [1]: dsolve(f(x).diff(x, 2)*f(x)**3 + 49)\r\n---------------------------------------------------------------------------\r\nAttributeError \r\n```\n\n@author:\nThat took excessively long so I picked something that traversed the same added code path. I'll revisit tonight." } ]
0e79b9b492a0e1db0e98e17cec77124c407cb214
diff --git a/sympy/solvers/ode/ode.py b/sympy/solvers/ode/ode.py index 75bccdf8d56b..d9d40e37f4fe 100644 --- a/sympy/solvers/ode/ode.py +++ b/sympy/solvers/ode/ode.py @@ -639,6 +639,7 @@ def recur_len(l): hint = hints['hint'] return _helper_simplify(eq, hint, hints, simplify, ics=ics) + def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): r""" Helper function of dsolve that calls the respective @@ -670,9 +671,15 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): else: sols = solvefunc(eq, func, order, match) if iterable(sols): - rv = [odesimp(eq, s, func, hint) for s in sols] + rv = [] + for s in sols: + simp = odesimp(eq, s, func, hint) + if iterable(simp): + rv.extend(simp) + else: + rv.append(simp) else: - rv = odesimp(eq, sols, func, hint) + rv = odesimp(eq, sols, func, hint) else: # We still want to integrate (you can disable it separately with the hint) if isinstance(solvefunc, SingleODESolver): @@ -686,6 +693,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): rv = _handle_Integral(exprs, func, hint) if isinstance(rv, list): + assert all(isinstance(i, Eq) for i in rv), rv # if not => internal error if simplify: rv = _remove_redundant_solutions(eq, rv, order, func.args[0]) if len(rv) == 1: @@ -707,6 +715,7 @@ def _helper_simplify(eq, hint, match, simplify=True, ics=None, **kwargs): rv = rv1 return rv + def solve_ics(sols, funcs, constants, ics): """ Solve for the constants given initial conditions diff --git a/sympy/solvers/ode/tests/test_ode.py b/sympy/solvers/ode/tests/test_ode.py index 547b8e425c34..b1ddcc784fde 100644 --- a/sympy/solvers/ode/tests/test_ode.py +++ b/sympy/solvers/ode/tests/test_ode.py @@ -23,7 +23,7 @@ from sympy.solvers.ode.nonhomogeneous import _undetermined_coefficients_match from sympy.solvers.ode.single import LinearCoefficients from sympy.solvers.deutils import ode_order -from sympy.testing.pytest import XFAIL, raises, slow +from sympy.testing.pytest import XFAIL, raises, slow, SKIP from sympy.utilities.misc import filldedent @@ -1094,3 +1094,11 @@ def test_issue_23425(): assert classify_ode(eq) == \ ('Liouville', 'nth_order_reducible', \ '2nd_power_series_ordinary', 'Liouville_Integral') + + +@SKIP("too slow for @slow") +def test_issue_25820(): + x = Symbol('x') + y = Function('y') + eq = y(x)**3*y(x).diff(x, 2) + 49 + assert dsolve(eq, y(x)) is not None # doesn't raise
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25814@0f39e0d
sympy/sympy
Python
25,814
Reject unordered sets in linear_eq_to_matrix
#### References to other Issues or PRs Fixes https://github.com/sympy/sympy/pull/25812 This PR contains the squashed commit version of the previously closed #25812. This new PR does not have commits which will unintentionally affect a large number of lines in the blame. Fixes https://github.com/sympy/sympy/issues/25423 If a user relies on `sets` to pass symbols to `linear_eq_to_matrix`, they might receive results that appear inconsistent. Since the equations' coefficients matrix depends on the order of symbols, using an unordered set can lead to confusion. Different runs could potentially produce different matrices, even if the equations and symbols remain the same. #### Brief description of what is fixed or changed Previous implementations allowed unordered `sets` as inputs for `linear_eq_to_matrix`, leading to unpredictable outputs. With input validation in `solveset.py`, symbols are provided in a predictable sequence, enforcing consistent behavior. New test cases implemented in `test_solveset.py` cover the scenarios where `sets` are passed to the function, ensuring proper error responses. #### Release Notes <!-- BEGIN RELEASE NOTES --> * solvers * `linear_eq_to_matrix` now raises TypeError when passing unordered symbols <!-- END RELEASE NOTES -->
2023-10-21T04:34:08Z
`linear_eq_to_matrix` should reject unordered symbols Although dictionaries are ordered in newer Python versions, sets are unordered and should be rejected as input to `linear_eq_to_matrix` which returns the linear coefficients for one or more expressions *in the order that symbols are given*. Although the routine which this function uses will order the symbols, this order is not reported back to the user. It might be better to simply raise an error in this case. ```python >>> linear_eq_to_matrix(2*x + y, {x, y}) # order is `list(ordered({x,y})) -> [x, y] (Matrix([[2, 1]]), Matrix([[0]])) ```
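A small demonstration of why the order matters and of two ways to pass the symbols deterministically; the `ordered` call mirrors the ordering mentioned in the issue comment above, and the printed matrices assume current `linear_eq_to_matrix` semantics (after this PR, passing the set itself raises `TypeError`):

```python
from sympy import ordered, symbols
from sympy.solvers.solveset import linear_eq_to_matrix

x, y = symbols('x y')

# An explicit sequence pins the column order to what was written:
A, b = linear_eq_to_matrix([2*x + y], [y, x])
print(A)   # Matrix([[1, 2]]) -- columns follow [y, x]

# If the symbols happen to live in a set, fix the order before the call
# instead of passing the set (which this PR turns into a TypeError):
syms = list(ordered({x, y}))
A, b = linear_eq_to_matrix([2*x + y], syms)
print(syms, A)   # [x, y] Matrix([[2, 1]])
```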
Can I work on this?
[ { "body": "Although dictionaries are ordered in newer Python versions, sets are unordered and should be rejected as input to `linear_eq_to_matrix` which returns the linear coefficients for one or more expressions *in the order that symbols are given*. Although the routine which this function uses will order the symbols, this order is not reported back to the user. It might be better to simply raise an error in this case.\r\n```python\r\n>>> linear_eq_to_matrix(2*x + y, {x, y}) # order is `list(ordered({x,y})) -> [x, y]\r\n(Matrix([[2, 1]]), Matrix([[0]]))\r\n```", "number": 25423, "title": "`linear_eq_to_matrix` should reject unordered symbols" } ]
c226febc80e48826d4314f63a391b31c4d20cfeb
{ "head_commit": "0f39e0dfb4462698318d3b855c70c4d509854f19", "head_commit_message": "Reject unordered sets in linear_eq_to_matrix\n\nPrevious implementations allowed unordered sets as inputs for\nlinear_eq_to_matrix, leading to unpredictable outputs. This change\nintroduces input validation to ensure that the symbols are provided\nin a predictable sequence, enforcing consistent behavior.\n\nNew test cases cover the scenarios where sets are passed to the\nfunction, ensuring proper error responses.\n\nResolves: #issue-number (if applicable)\nauthor: add Congxu Yang to .mailmap", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 80e50bcc92c1..512b3fd19773 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -466,6 +466,7 @@ Colin B. Macdonald <[email protected]> <[email protected]>\n Colin Marquardt <[email protected]>\n Colleen Lee <[email protected]> <[email protected]>\n Comer Duncan <[email protected]>\n+Congxu Yang <[email protected]>\n Constantin Mateescu <[email protected]>\n Costor <[email protected]>\n Craig A. Stoudt <[email protected]>\ndiff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py\nindex 0f0d947d114e..f5650e111631 100644\n--- a/sympy/solvers/solveset.py\n+++ b/sympy/solvers/solveset.py\n@@ -2605,6 +2605,11 @@ def linear_eq_to_matrix(equations, *symbols):\n are to be found.\n '''))\n \n+ # Check if 'symbols' is a set and raise an error if it is\n+ if isinstance(symbols[0], set):\n+ raise TypeError(\n+ \"Unordered 'set' type is not supported as input for symbols.\")\n+\n if hasattr(symbols[0], '__iter__'):\n symbols = symbols[0]\n \ndiff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py\nindex 7b4473fd5a70..c100c19cd60a 100644\n--- a/sympy/solvers/tests/test_solveset.py\n+++ b/sympy/solvers/tests/test_solveset.py\n@@ -3271,3 +3271,13 @@ def test_issue_22628():\n \n def test_issue_25781():\n assert solve(sqrt(x/2) - x) == [0, S.Half]\n+\n+\n+def test_issue_25423():\n+ x, y = symbols('x y')\n+ equations1 = []\n+ equations2 = [2 * x + 3 * y - 1]\n+ nonlinear_equations = [x ** 2 + y ** 2 - 1, x - y]\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations1, {x, y}))\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations2, {x, y}))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix(nonlinear_equations, x, y))\n" }
[ { "diff_hunk": "@@ -3271,3 +3271,13 @@ def test_issue_22628():\n \n def test_issue_25781():\n assert solve(sqrt(x/2) - x) == [0, S.Half]\n+\n+\n+def test_issue_25423():\n+ x, y = symbols('x y')\n+ equations1 = []\n+ equations2 = [2 * x + 3 * y - 1]\n+ nonlinear_equations = [x ** 2 + y ** 2 - 1, x - y]\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations1, {x, y}))\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations2, {x, y}))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix(nonlinear_equations, x, y))", "line": null, "original_line": 3283, "original_start_line": 3280, "path": "sympy/solvers/tests/test_solveset.py", "start_line": null, "text": "@author:\nThis change from TypeError to ValueError will cause test failure becasue within the `linear_eq_to_matrix` function, a _TypeError_ is raised when checking to see if the symbols parameter is a set, indicating that an unordered `set` type is not supported for the symbols parameter. This is the actual behavior of the function checking the `set` input. \r\n\r\nI think since we are concerned with `set` type as input, we should indeed raise _TypeError_ instead of ValueError?\n\n@user2:\nFor some reason I think the preference is to raise a ValueError. @user1 ?\r\n\r\nThat would be consistent with raising a ValueError when a set is used to pass the equations, too." }, { "diff_hunk": "@@ -3271,3 +3271,13 @@ def test_issue_22628():\n \n def test_issue_25781():\n assert solve(sqrt(x/2) - x) == [0, S.Half]\n+\n+\n+def test_issue_25423():\n+ x, y = symbols('x y')\n+ equations1 = []\n+ equations2 = [2 * x + 3 * y - 1]\n+ nonlinear_equations = [x ** 2 + y ** 2 - 1, x - y]\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations1, {x, y}))\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations2, {x, y}))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix(nonlinear_equations, x, y))", "line": null, "original_line": 3283, "original_start_line": null, "path": "sympy/solvers/tests/test_solveset.py", "start_line": null, "text": "@user1:\nInstead of this, how about showing/affirming that passing `set(equations2)` with `[x, y]` does *not* raise an error.\r\n\r\nAlso, symbols `x` and `y` have already been created at the top of the file and need not be recreated here.\n\n@author:\nIn the case of `set(equations2)`, there will be test failure indicates that the current implementation of `linear_eq_to_matrix` does not support a set of equations as input. It will raise exception becasue Python set is not recognized as a valid sequence for this purpose. \r\n\r\nIn this case, for the test to work, I will need to convert `sets` to `list`. In such a case, we are not testing the function's ability to handle sets directly. It's a gap in our testing if the function should handle sets.\r\n\r\n\n\n@user1:\nOK, that's fine (and probably better) that an error raises. It won't change the solution but it could be a little harder to debug if the order changes wrt what was passed. Then let's delete that 3rd test since it is already tested in lines 1408ff in the same file." 
}, { "diff_hunk": "@@ -3271,3 +3271,13 @@ def test_issue_22628():\n \n def test_issue_25781():\n assert solve(sqrt(x/2) - x) == [0, S.Half]\n+\n+\n+def test_issue_25423():\n+ x, y = symbols('x y')\n+ equations1 = []\n+ equations2 = [2 * x + 3 * y - 1]\n+ nonlinear_equations = [x ** 2 + y ** 2 - 1, x - y]\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations1, {x, y}))\n+ raises(TypeError, lambda: linear_eq_to_matrix(equations2, {x, y}))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix(nonlinear_equations, x, y))", "line": null, "original_line": 3283, "original_start_line": 3280, "path": "sympy/solvers/tests/test_solveset.py", "start_line": null, "text": "@user1:\n```suggestion\r\n raises(TypeError, lambda: linear_eq_to_matrix(equations1, {x, y}))\r\n raises(TypeError, lambda: linear_eq_to_matrix(equations2, {x, y}))\r\n raises(ValueError, lambda: linear_eq_to_matrix(set(equations2), (x, y)))\r\n```" } ]
aa3db964d220c750d0a00aef264506bb0c94f3dd
diff --git a/.mailmap b/.mailmap index 80e50bcc92c1..512b3fd19773 100644 --- a/.mailmap +++ b/.mailmap @@ -466,6 +466,7 @@ Colin B. Macdonald <[email protected]> <[email protected]> Colin Marquardt <[email protected]> Colleen Lee <[email protected]> <[email protected]> Comer Duncan <[email protected]> +Congxu Yang <[email protected]> Constantin Mateescu <[email protected]> Costor <[email protected]> Craig A. Stoudt <[email protected]> diff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py index 0f0d947d114e..f5650e111631 100644 --- a/sympy/solvers/solveset.py +++ b/sympy/solvers/solveset.py @@ -2605,6 +2605,11 @@ def linear_eq_to_matrix(equations, *symbols): are to be found. ''')) + # Check if 'symbols' is a set and raise an error if it is + if isinstance(symbols[0], set): + raise TypeError( + "Unordered 'set' type is not supported as input for symbols.") + if hasattr(symbols[0], '__iter__'): symbols = symbols[0] diff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py index 7b4473fd5a70..ed9a337be991 100644 --- a/sympy/solvers/tests/test_solveset.py +++ b/sympy/solvers/tests/test_solveset.py @@ -1408,6 +1408,11 @@ def test_linear_eq_to_matrix(): assert linear_eq_to_matrix(Eq(x + 2, 1), x) == ( Matrix([[1]]), Matrix([[-1]])) + # issue 25423 + raises(TypeError, lambda: linear_eq_to_matrix([], {x, y})) + raises(TypeError, lambda: linear_eq_to_matrix([x + y], {x, y})) + raises(ValueError, lambda: linear_eq_to_matrix({x + y}, (x, y))) + def test_issue_16577(): assert linear_eq_to_matrix(Eq(a*(2*x + 3*y) + 4*y, 5), x, y) == (
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25717@6b7112f
sympy/sympy
Python
25,717
Fix Rational parsing in Mathematica parser
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs Fixes #25716 #### Brief description of what is fixed or changed The Mathematica parser at present creates an `AppliedUndef` function with the same name and syntax as the built-in sympy `Rational` type, but none of the properties. This is because the conversion is not defined in the parser. This PR adds the relevant conversion and a simple test to the test cases. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * parsing * Fixed a bug in the Mathematica parser parsing rational objects Rational[p,q]. <!-- END RELEASE NOTES -->
2023-09-22T14:42:48Z
'Rational[m,n]' conversion from Mathematica fails to create a Rational Minimal working example: ``` > from sympy.parsing.mathematica import parse_mathematica > parse_mathematica("Rational[1,2]") Rational(1, 2) > parse_mathematica("Rational[1,2]") + 2 AttributeError: 'Rational' object has no attribute 'p' ``` I have investigated this a little, and the parser is creating an `AppliedUndef` function called `Rational` which appears to be a Rational as expected, but in fact fails to have any of the required attributes for a type named 'Rational' (in the MWE, it has no numerator attribute), leading to some strange and confusing behaviour. The fix is to define the map from Mathematica's `Rational` to Sympy's `Rational`. I have a patch which seems to work, which I will submit as a PR shortly.
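With the conversion in place, the parser should hand back a genuine `Rational` rather than an `AppliedUndef`, so arithmetic works; the printed results below assume a SymPy version that includes this fix (the `2/19` value simply matches the test added in the PR):

```python
from sympy import Rational
from sympy.parsing.mathematica import parse_mathematica

e = parse_mathematica("Rational[2,19]")
print(isinstance(e, Rational))   # True once the converter maps Rational
print(e + 2)                     # 40/19, instead of the AttributeError above
```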
[ { "body": "Minimal working example:\r\n\r\n```\r\n> from sympy.parsing.mathematica import parse_mathematica\r\n> parse_mathematica(\"Rational[1,2]\")\r\nRational(1, 2)\r\n> parse_mathematica(\"Rational[1,2]\") + 2\r\nAttributeError: 'Rational' object has no attribute 'p'\r\n```\r\n\r\nI have investigated this a little, and the parser is creating an `AppliedUndef` function called `Rational` which appears to be a Rational as expected, but in fact fails to have any of the required attributes for a type named 'Rational' (in the MWE, it has no numerator attribute), leading to some strange and confusing behaviour.\r\n\r\nThe fix is to define the map from Mathematica's `Rational` to Sympy's `Rational`. I have a patch which seems to work, which I will submit as a PR shortly.", "number": 25716, "title": "'Rational[m,n]' conversion from Mathematica fails to create a Rational" } ]
c0c175e3ab05b40edc44b445a55c076820a8f2e2
{ "head_commit": "6b7112f1f19f0c3466d5ad430980b553f66950a6", "head_commit_message": "Added test for Rational conversion in Mathematica parser", "patch_to_review": "diff --git a/sympy/parsing/mathematica.py b/sympy/parsing/mathematica.py\nindex d0cbe7f7b13f..09414d11628e 100644\n--- a/sympy/parsing/mathematica.py\n+++ b/sympy/parsing/mathematica.py\n@@ -5,7 +5,7 @@\n from typing import Any, Callable\n \n import sympy\n-from sympy import Mul, Add, Pow, log, exp, sqrt, cos, sin, tan, asin, acos, acot, asec, acsc, sinh, cosh, tanh, asinh, \\\n+from sympy import Mul, Add, Pow, Rational, log, exp, sqrt, cos, sin, tan, asin, acos, acot, asec, acsc, sinh, cosh, tanh, asinh, \\\n acosh, atanh, acoth, asech, acsch, expand, im, flatten, polylog, cancel, expand_trig, sign, simplify, \\\n UnevaluatedExpr, S, atan, atan2, Mod, Max, Min, rf, Ei, Si, Ci, airyai, airyaiprime, airybi, primepi, prime, \\\n isprime, cot, sec, csc, csch, sech, coth, Function, I, pi, Tuple, GreaterThan, StrictGreaterThan, StrictLessThan, \\\n@@ -131,6 +131,7 @@ class MathematicaParser:\n # left: Mathematica, right: SymPy\n CORRESPONDENCES = {\n 'Sqrt[x]': 'sqrt(x)',\n+ 'Rational[x,y]': 'Rational(x,y)',\n 'Exp[x]': 'exp(x)',\n 'Log[x]': 'log(x)',\n 'Log[x,y]': 'log(y,x)',\n@@ -975,6 +976,7 @@ def converter(expr):\n \"Times\": Mul,\n \"Plus\": Add,\n \"Power\": Pow,\n+ \"Rational\": lambda *a: Rational(*a),\n \"Log\": lambda *a: log(*reversed(a)),\n \"Log2\": lambda x: log(x, 2),\n \"Log10\": lambda x: log(x, 10),\ndiff --git a/sympy/parsing/tests/test_mathematica.py b/sympy/parsing/tests/test_mathematica.py\nindex b6df911f30a2..c72b3e5eeaef 100644\n--- a/sympy/parsing/tests/test_mathematica.py\n+++ b/sympy/parsing/tests/test_mathematica.py\n@@ -67,7 +67,8 @@ def test_mathematica():\n 'LogIntegral[4]': ' li(4)',\n 'PrimePi[7]': 'primepi(7)',\n 'Prime[5]': 'prime(5)',\n- 'PrimeQ[5]': 'isprime(5)'\n+ 'PrimeQ[5]': 'isprime(5)',\n+ 'Rational[2,19]': 'Rational(2,19)', # test case for issue 25716\n }\n \n for e in d:\n" }
[ { "diff_hunk": "@@ -975,6 +976,7 @@ def converter(expr):\n \"Times\": Mul,\n \"Plus\": Add,\n \"Power\": Pow,\n+ \"Rational\": lambda *a: Rational(*a),", "line": null, "original_line": 979, "original_start_line": null, "path": "sympy/parsing/mathematica.py", "start_line": null, "text": "@user1:\nWhy not just \r\n\r\n```suggestion\r\n \"Rational\": Rational,\r\n```\n\n@author:\nNo reason except for clarity that it's a two-argument function that takes its arguments in the same order as Mathematica. Probably better without the lambda-function, so I'll make that change." } ]
fcf8232737a2b4f9f0c64218f40d9b75be806497
diff --git a/.mailmap b/.mailmap index 99e41da3b539..fa4f81a43713 100644 --- a/.mailmap +++ b/.mailmap @@ -706,6 +706,7 @@ James Goppert <[email protected]> James Harrop <[email protected]> James Pearson <[email protected]> James Taylor <[email protected]> +James Whitehead <[email protected]> jcwhitehead <[email protected]> Jan Kruse <[email protected]> Jan-Philipp Hoffmann <[email protected]> Jan-Philipp Hoffmann <[email protected]> Jared Lumpe <[email protected]> Michael Jared Lumpe <[email protected]> diff --git a/sympy/parsing/mathematica.py b/sympy/parsing/mathematica.py index d0cbe7f7b13f..3856e0a411c0 100644 --- a/sympy/parsing/mathematica.py +++ b/sympy/parsing/mathematica.py @@ -5,7 +5,7 @@ from typing import Any, Callable import sympy -from sympy import Mul, Add, Pow, log, exp, sqrt, cos, sin, tan, asin, acos, acot, asec, acsc, sinh, cosh, tanh, asinh, \ +from sympy import Mul, Add, Pow, Rational, log, exp, sqrt, cos, sin, tan, asin, acos, acot, asec, acsc, sinh, cosh, tanh, asinh, \ acosh, atanh, acoth, asech, acsch, expand, im, flatten, polylog, cancel, expand_trig, sign, simplify, \ UnevaluatedExpr, S, atan, atan2, Mod, Max, Min, rf, Ei, Si, Ci, airyai, airyaiprime, airybi, primepi, prime, \ isprime, cot, sec, csc, csch, sech, coth, Function, I, pi, Tuple, GreaterThan, StrictGreaterThan, StrictLessThan, \ @@ -131,6 +131,7 @@ class MathematicaParser: # left: Mathematica, right: SymPy CORRESPONDENCES = { 'Sqrt[x]': 'sqrt(x)', + 'Rational[x,y]': 'Rational(x,y)', 'Exp[x]': 'exp(x)', 'Log[x]': 'log(x)', 'Log[x,y]': 'log(y,x)', @@ -975,6 +976,7 @@ def converter(expr): "Times": Mul, "Plus": Add, "Power": Pow, + "Rational": Rational, "Log": lambda *a: log(*reversed(a)), "Log2": lambda x: log(x, 2), "Log10": lambda x: log(x, 10), diff --git a/sympy/parsing/tests/test_mathematica.py b/sympy/parsing/tests/test_mathematica.py index b6df911f30a2..c72b3e5eeaef 100644 --- a/sympy/parsing/tests/test_mathematica.py +++ b/sympy/parsing/tests/test_mathematica.py @@ -67,7 +67,8 @@ def test_mathematica(): 'LogIntegral[4]': ' li(4)', 'PrimePi[7]': 'primepi(7)', 'Prime[5]': 'prime(5)', - 'PrimeQ[5]': 'isprime(5)' + 'PrimeQ[5]': 'isprime(5)', + 'Rational[2,19]': 'Rational(2,19)', # test case for issue 25716 } for e in d:
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-25706@970e61a
sympy/sympy
Python
25,706
remove `Eq.rewrite(Add)`
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> closes #25701 #### Brief description of what is fixed or changed Remove `rewrite(Add)` which is misplaced in the rewrite machinery. #### Other comments From @oscarbenjamin in #25701 > I have said before that this rewrite rule is a mistake: rewrite is not expected to turn one kind of expression into a different kind and in this case a Boolean should not be rewritten to an Expr. > This rewrite method was added as a compromise since the natural method `as_expr()` was frowned on since `Eq` aren't expressions. The problem is that it was added into machinery that wants to apply rewrite rules to an entire expression tree. And in the context of `rewrite(Add)` which only applies to `Eq` it doesn't make sense to apply this to a tree. There are a few solutions: 1) add a guard so this can't be run on anything but `Eq`. The fact that we have to do so, however, indicates that this is a design flaw. So do one of the following: 2) Agree on a name of a method for Eq that return lhs - rhs and gives the option of doing so in an unevaluated manner. - `rhs0` could return Eq(Add(lhs, -rhs, evaluate=False), 0, evaluate=False) and the user could grab the lhs as in `eq.rhs0.lhs` or - `lhs_rhs()` could return the diff in an evaluated manner or `lhs_rhs(evaluate=False)` could do so in an unevaluated manner. 3) make the user reconstruct the Add on their own: `Add(eq.lhs, -eq.rhs, evaluate=False)` This PR implements #2 and provides a pain-free way of updating the code-base since the name changes but the functionality is identically the same. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * core * BREAKING CHANGE `eq.rewrite(Add)` should be replaced with `eq.lhs - eq.rhs` <!-- END RELEASE NOTES -->
2023-09-20T03:02:33Z
TypeError on Eq(2*sign(x + 3)/(5*Abs(x + 3)**(3/5)), 0) ```python from sympy import Symbol, Eq, sign, Abs, Rational x = Symbol('x', real=True) eq = Eq(2*sign(x + 3)/(5*Abs(x + 3)**Rational(3, 5)), 0) eq.simplify() ``` Result: ``` Traceback (most recent call last): File "/home/satels/workenv/krcore/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-1-2fbb666c6791>", line 7, in <module> eq.simplify() File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1853, in simplify return simplify(self, **kwargs) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/simplify/simplify.py", line 601, in simplify return _eval_simplify(**kwargs) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/relational.py", line 692, in _eval_simplify e.rewrite(Add, evaluate=False), x) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1981, in rewrite return self._rewrite(pattern, rule, method, **hints) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1986, in _rewrite args = [a._rewrite(pattern, rule, method, **hints) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1986, in <listcomp> args = [a._rewrite(pattern, rule, method, **hints) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1986, in _rewrite args = [a._rewrite(pattern, rule, method, **hints) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 1986, in <listcomp> args = [a._rewrite(pattern, rule, method, **hints) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py", line 2000, in _rewrite return self.func(*args) File "/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/functions/elementary/piecewise.py", line 33, in __new__ raise TypeError(filldedent(''' TypeError: Second argument must be a Boolean, not `Add` ```
This is because an `Eq` is rewritten as an `Add`:

```python
In [4]: Eq(x, 2).rewrite(Add)
Out[4]: x - 2
```

I have said before that this rewrite rule is a mistake: rewrite is not expected to turn one kind of expression into a different kind and in this case a Boolean should not be rewritten to an Expr.

This is a fix to the problem:

```diff
diff --git a/sympy/core/relational.py b/sympy/core/relational.py
index b2dbd1f395..a534926411 100644
--- a/sympy/core/relational.py
+++ b/sympy/core/relational.py
@@ -686,8 +686,7 @@ def _eval_simplify(self, **kwargs):
                 from .add import Add
                 from sympy.solvers.solveset import linear_coeffs
                 x = free.pop()
-                m, b = linear_coeffs(
-                    e.rewrite(Add, evaluate=False), x)
+                m, b = linear_coeffs(Add(e.lhs, -e.rhs, evaluate=False), x)
                 if m.is_zero is False:
                     enew = e.func(x, -b / m)
                 else:
```

With the diff above we have:

```python
In [4]: eq.simplify()
Out[4]:
⎧       0         for x = -3
⎪
⎪   2⋅(x + 3)
⎨──────────────   otherwise   = 0
⎪  │       8/5│
⎪5⋅│(x + 3)   │
⎩
```

This is not completely correct but the basic bug here is fixed. The reason that this is not completely correct is that when `x = -3` the lhs should be undefined rather than `0`. The `Eq.rewrite(Add)` should be removed entirely to fix that problem in general.
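To make the `linear_coeffs` call in the fix easier to follow, here is a minimal sketch (illustrative only; the equation below is not from the report):

```python
from sympy import Add, Eq, Symbol
from sympy.solvers.solveset import linear_coeffs

x = Symbol('x')
e = Eq(3*x + 1, 7)

# The fix builds lhs - rhs directly instead of rewriting the Eq:
m, b = linear_coeffs(Add(e.lhs, -e.rhs, evaluate=False), x)  # m == 3, b == -6
simplified = e.func(x, -b/m)  # Eq(x, 2) -- the form _eval_simplify builds as `enew`
```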
[ { "body": "```python\r\nfrom sympy import Symbol, Eq, sign, Abs, Rational\r\n\r\nx = Symbol('x', real=True)\r\n\r\neq = Eq(2*sign(x + 3)/(5*Abs(x + 3)**Rational(3, 5)), 0)\r\n\r\neq.simplify()\r\n\r\n```\r\n\r\nResult:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 3508, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-1-2fbb666c6791>\", line 7, in <module>\r\n eq.simplify()\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1853, in simplify\r\n return simplify(self, **kwargs)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/simplify/simplify.py\", line 601, in simplify\r\n return _eval_simplify(**kwargs)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/relational.py\", line 692, in _eval_simplify\r\n e.rewrite(Add, evaluate=False), x)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1981, in rewrite\r\n return self._rewrite(pattern, rule, method, **hints)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1986, in _rewrite\r\n args = [a._rewrite(pattern, rule, method, **hints)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1986, in <listcomp>\r\n args = [a._rewrite(pattern, rule, method, **hints)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1986, in _rewrite\r\n args = [a._rewrite(pattern, rule, method, **hints)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 1986, in <listcomp>\r\n args = [a._rewrite(pattern, rule, method, **hints)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/core/basic.py\", line 2000, in _rewrite\r\n return self.func(*args)\r\n File \"/home/satels/workenv/krcore/lib/python3.10/site-packages/sympy/functions/elementary/piecewise.py\", line 33, in __new__\r\n raise TypeError(filldedent('''\r\nTypeError: \r\nSecond argument must be a Boolean, not `Add`\r\n```", "number": 25701, "title": "TypeError on Eq(2*sign(x + 3)/(5*Abs(x + 3)**(3/5)), 0)" } ]
67c72efedfbddc20589f463fe1a8c813640b6349
{ "head_commit": "970e61a553f4df14964dfb5544b6225be2ab7601", "head_commit_message": "e.rewrite(Add) -> e.lhs_rhs()", "patch_to_review": "diff --git a/sympy/calculus/util.py b/sympy/calculus/util.py\nindex e140396d8cd7..2e9bfab05108 100644\n--- a/sympy/calculus/util.py\n+++ b/sympy/calculus/util.py\n@@ -422,7 +422,7 @@ def _check(orig_f, period):\n period = None\n \n if isinstance(f, Relational):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()\n \n f = f.simplify()\n \ndiff --git a/sympy/core/expr.py b/sympy/core/expr.py\nindex ee4ae39abee1..457a2fba55eb 100644\n--- a/sympy/core/expr.py\n+++ b/sympy/core/expr.py\n@@ -68,6 +68,9 @@ class Expr(Basic, EvalfMixin):\n \n is_scalar = True # self derivative is 1\n \n+ def lhs_rhs(self, evaluate=True):\n+ return self # only for Eq\n+\n @property\n def _diff_wrt(self):\n \"\"\"Return True if one can differentiate with respect to this\ndiff --git a/sympy/core/relational.py b/sympy/core/relational.py\nindex b2dbd1f39552..10b107765a0b 100644\n--- a/sympy/core/relational.py\n+++ b/sympy/core/relational.py\n@@ -419,7 +419,7 @@ def _eval_simplify(self, **kwargs):\n if r.is_Relational:\n if not isinstance(r.lhs, Expr) or not isinstance(r.rhs, Expr):\n return r\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()\n # replace dif with a valid Number that will\n # allow a definitive comparison with 0\n v = None\n@@ -437,7 +437,7 @@ def _eval_simplify(self, **kwargs):\n try:\n from sympy.solvers.solveset import linear_coeffs\n x = free.pop()\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()\n m, b = linear_coeffs(dif, x)\n if m.is_zero is False:\n if m.is_negative:\n@@ -467,7 +467,7 @@ def _eval_simplify(self, **kwargs):\n from sympy.solvers.solveset import linear_coeffs\n from sympy.polys.polytools import gcd\n free = list(ordered(free))\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()\n m = linear_coeffs(dif, *free)\n constant = m[-1]\n del m[-1]\n@@ -594,7 +594,7 @@ class Equality(Relational):\n \n Since this object is already an expression, it does not respond to\n the method ``as_expr`` if one tries to create `x - y` from ``Eq(x, y)``.\n- This can be done with the ``rewrite(Add)`` method.\n+ This can be done with the ``lhs_rhs()`` method.\n \n .. deprecated:: 1.5\n \n@@ -626,7 +626,7 @@ def __new__(cls, lhs, rhs, **options):\n def _eval_relation(cls, lhs, rhs):\n return _sympify(lhs == rhs)\n \n- def _eval_rewrite_as_Add(self, L, R, evaluate=True, **kwargs):\n+ def lhs_rhs(self, evaluate=True):\n \"\"\"\n return Eq(L, R) as L - R. 
To control the evaluation of\n the result set pass `evaluate=True` to give L - R;\n@@ -638,17 +638,18 @@ def _eval_rewrite_as_Add(self, L, R, evaluate=True, **kwargs):\n Examples\n ========\n \n- >>> from sympy import Eq, Add\n+ >>> from sympy import Eq\n >>> from sympy.abc import b, x\n >>> eq = Eq(x + b, x - b)\n- >>> eq.rewrite(Add)\n+ >>> eq.lhs_rhs()\n 2*b\n- >>> eq.rewrite(Add, evaluate=None).args\n+ >>> eq.lhs_rhs(evaluate=None).args\n (b, b, x, -x)\n- >>> eq.rewrite(Add, evaluate=False).args\n+ >>> eq.lhs_rhs(evaluate=False).args\n (b, x, b, -x)\n \"\"\"\n from .add import _unevaluated_Add, Add\n+ L, R = self.args\n if L == 0:\n return R\n if R == 0:\n@@ -683,11 +684,10 @@ def _eval_simplify(self, **kwargs):\n free = self.free_symbols\n if len(free) == 1:\n try:\n- from .add import Add\n from sympy.solvers.solveset import linear_coeffs\n x = free.pop()\n m, b = linear_coeffs(\n- e.rewrite(Add, evaluate=False), x)\n+ e.lhs_rhs(evaluate=False), x)\n if m.is_zero is False:\n enew = e.func(x, -b / m)\n else:\ndiff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py\nindex a25650fce20f..3b1ffe9b5a39 100644\n--- a/sympy/core/tests/test_relational.py\n+++ b/sympy/core/tests/test_relational.py\n@@ -12,6 +12,8 @@\n from sympy.core.power import Pow\n from sympy.core.singleton import S\n from sympy.core.symbol import (Symbol, symbols)\n+from sympy.functions.elementary.complexes import sign, Abs\n+from sympy.functions.elementary.piecewise import Piecewise\n from sympy.functions.elementary.exponential import (exp, exp_polar, log)\n from sympy.functions.elementary.integers import (ceiling, floor)\n from sympy.functions.elementary.miscellaneous import sqrt\n@@ -558,14 +560,14 @@ def test_x_minus_y_not_same_as_x_lt_y():\n \n ineq = Lt(x, y, evaluate=False)\n raises(TypeError, lambda: ineq.doit())\n- assert ineq.lhs - ineq.rhs < 0\n+ assert ineq.lhs_rhs() < 0\n \n t = Symbol('t', imaginary=True)\n x = 2 + t\n y = 3 + t\n ineq = Lt(x, y, evaluate=False)\n raises(TypeError, lambda: ineq.doit())\n- assert ineq.lhs - ineq.rhs < 0\n+ assert ineq.lhs_rhs() < 0\n \n # this one should give error either way\n x = I + 2\n@@ -1026,18 +1028,32 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr\n+ assert (x + 1).lhs_rhs() == x + 1\n+\n eq = Eq(x + y, y - x)\n- assert eq.rewrite(Add) == 2*x\n- assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n- assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n+ assert eq.lhs_rhs() == 2*x\n+ assert eq.lhs_rhs(evaluate=None).args == (x, x, y, -y)\n+ assert eq.lhs_rhs(evaluate=False).args == (x, y, x, -y)\n for e in (True, False, None):\n- assert Eq(x, 0, evaluate=e).rewrite(Add) == x\n- assert Eq(0, x, evaluate=e).rewrite(Add) == x\n+ assert Eq(x, 0, evaluate=e).lhs_rhs() == x\n+ assert Eq(0, x, evaluate=e).lhs_rhs() == x\n+\n+ # issue 25701\n+ r = symbols('r', real=True)\n+ assert Eq(2*sign(r + 3)/(5*Abs(r + 3)**Rational(3, 5)), 0\n+ ).simplify() == Eq(Piecewise(\n+ (0, Eq(r, -3)), ((r + 3)/(5*Abs((r + 3)**Rational(8, 5)))*2, True)), 0)\n \n \n def test_issue_15847():\n- a = Ne(x*(x+y), x**2 + x*y)\n+ a = Ne(x*(x + y), x**2 + x*y)\n assert simplify(a) == False\n \n \ndiff --git 
a/sympy/functions/elementary/piecewise.py b/sympy/functions/elementary/piecewise.py\nindex 016fe3f4e75a..67ae79e6195b 100644\n--- a/sympy/functions/elementary/piecewise.py\n+++ b/sympy/functions/elementary/piecewise.py\n@@ -830,7 +830,7 @@ def __eval_cond(cls, cond):\n return True\n if isinstance(cond, Eq):\n try:\n- diff = cond.lhs - cond.rhs\n+ diff = cond.lhs_rhs()\n if diff.is_commutative:\n return diff.is_zero\n except TypeError:\ndiff --git a/sympy/geometry/ellipse.py b/sympy/geometry/ellipse.py\nindex f096556f4bd7..5a76fb526370 100644\n--- a/sympy/geometry/ellipse.py\n+++ b/sympy/geometry/ellipse.py\n@@ -1557,7 +1557,7 @@ def __new__(cls, *args, **kwargs):\n y = kwargs.get('y', 'y')\n equation = args[0].expand()\n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()\n x = find(x, equation)\n y = find(y, equation)\n \ndiff --git a/sympy/geometry/line.py b/sympy/geometry/line.py\nindex e9b29b5766bd..08920623ea77 100644\n--- a/sympy/geometry/line.py\n+++ b/sympy/geometry/line.py\n@@ -1192,7 +1192,7 @@ def __new__(cls, *args, **kwargs):\n \n equation = args[0]\n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()\n \n def find_or_missing(x):\n try:\ndiff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py\nindex ac190ac50883..429c0c4b676b 100644\n--- a/sympy/logic/boolalg.py\n+++ b/sympy/logic/boolalg.py\n@@ -4,7 +4,6 @@\n \n from collections import defaultdict\n from itertools import chain, combinations, product, permutations\n-from sympy.core.add import Add\n from sympy.core.basic import Basic\n from sympy.core.cache import cacheit\n from sympy.core.containers import Tuple\n@@ -662,7 +661,7 @@ def _eval_simplify(self, **kwargs):\n if (e.lhs != x or x in e.rhs.free_symbols) and x not in reps:\n try:\n m, b = linear_coeffs(\n- e.rewrite(Add, evaluate=False), x)\n+ e.lhs_rhs(evaluate=False), x)\n enew = e.func(x, -b/m)\n if measure(enew) <= ratio*measure(e):\n e = enew\ndiff --git a/sympy/plotting/series.py b/sympy/plotting/series.py\nindex 731d1df89154..dd08748330e6 100644\n--- a/sympy/plotting/series.py\n+++ b/sympy/plotting/series.py\n@@ -2485,7 +2485,7 @@ def _preprocess_meshgrid_expression(expr, adaptive):\n \"\"\"\n equality = False\n if isinstance(expr, Equality):\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()\n equality = True\n elif isinstance(expr, Relational):\n expr = expr.gts - expr.lts\ndiff --git a/sympy/polys/polyutils.py b/sympy/polys/polyutils.py\nindex 82c8c836191e..2a632f7736ff 100644\n--- a/sympy/polys/polyutils.py\n+++ b/sympy/polys/polyutils.py\n@@ -190,7 +190,7 @@ def _parallel_dict_from_expr_if_gens(exprs, opt):\n poly = {}\n \n if expr.is_Equality:\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()\n \n for term in Add.make_args(expr):\n coeff, monom = [], [0]*k\n@@ -249,7 +249,7 @@ def _is_coeff(factor):\n terms = []\n \n if expr.is_Equality:\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()\n \n for term in Add.make_args(expr):\n coeff, elements = [], {}\ndiff --git a/sympy/solvers/deutils.py b/sympy/solvers/deutils.py\nindex c968b65c8d51..64ca1e9fb511 100644\n--- a/sympy/solvers/deutils.py\n+++ b/sympy/solvers/deutils.py\n@@ -173,7 +173,7 @@ def _desolve(eq, func=None, hint=\"default\", ics=None, simplify=True, *, prep=Tru\n classify_pde(pde.py)\n \"\"\"\n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n \n # preprocess the equation and find func if not given\n if prep or func is None:\ndiff --git 
a/sympy/solvers/diophantine/diophantine.py b/sympy/solvers/diophantine/diophantine.py\nindex e9137170930f..9522482ee0cc 100644\n--- a/sympy/solvers/diophantine/diophantine.py\n+++ b/sympy/solvers/diophantine/diophantine.py\n@@ -1335,7 +1335,7 @@ def diophantine(eq, param=symbols(\"t\", integer=True), syms=None,\n eq = _sympify(eq)\n \n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n \n try:\n var = list(eq.expand(force=True).free_symbols)\ndiff --git a/sympy/solvers/inequalities.py b/sympy/solvers/inequalities.py\nindex 246a752c3106..07f8e1297f64 100644\n--- a/sympy/solvers/inequalities.py\n+++ b/sympy/solvers/inequalities.py\n@@ -233,7 +233,7 @@ def reduce_rational_inequalities(exprs, gen, relational=True):\n expr, rel = expr\n else:\n if expr.is_Relational:\n- expr, rel = expr.lhs - expr.rhs, expr.rel_op\n+ expr, rel = expr.lhs_rhs(), expr.rel_op\n else:\n expr, rel = expr, '=='\n \n@@ -479,7 +479,7 @@ def solve_univariate_inequality(expr, gen, relational=True, domain=S.Reals, cont\n rv = S.EmptySet\n \n else:\n- e = expr.lhs - expr.rhs\n+ e = expr.lhs_rhs()\n period = periodicity(e, gen)\n if period == S.Zero:\n e = expand_mul(e)\n@@ -805,7 +805,7 @@ def classify(ie, s, i):\n \n rv = None\n oo = S.Infinity\n- expr = ie.lhs - ie.rhs\n+ expr = ie.lhs_rhs()\n try:\n p = Poly(expr, s)\n if p.degree() == 0:\ndiff --git a/sympy/solvers/ode/lie_group.py b/sympy/solvers/ode/lie_group.py\nindex 329b2d5d30d4..90bfeb8fd802 100644\n--- a/sympy/solvers/ode/lie_group.py\n+++ b/sympy/solvers/ode/lie_group.py\n@@ -230,7 +230,7 @@ def infinitesimals(eq, func=None, order=None, hint='default', match=None):\n \"\"\"\n \n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n if not func:\n eq, func = _preprocess(eq)\n variables = func.args\ndiff --git a/sympy/solvers/ode/ode.py b/sympy/solvers/ode/ode.py\nindex 75bccdf8d56b..5a79876e876e 100644\n--- a/sympy/solvers/ode/ode.py\n+++ b/sympy/solvers/ode/ode.py\n@@ -943,7 +943,7 @@ class in it. 
Note that a hint may do this anyway if\n \"work with functions of one variable, not %s\" % func)\n \n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n \n # Some methods want the unprocessed equation\n eq_orig = eq\n@@ -1213,7 +1213,7 @@ def _sympify(eq):\n eq, funcs = (_sympify(w) for w in [eq, funcs])\n for i, fi in enumerate(eq):\n if isinstance(fi, Equality):\n- eq[i] = fi.lhs - fi.rhs\n+ eq[i] = fi.lhs_rhs()\n \n t = list(list(eq[0].atoms(Derivative))[0].atoms(Symbol))[0]\n matching_hints = {\"no_of_equation\":i+1}\n@@ -2815,7 +2815,7 @@ def checkinfsol(eq, infinitesimals, func=None, order=None):\n \n \"\"\"\n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n if not func:\n eq, func = _preprocess(eq)\n variables = func.args\ndiff --git a/sympy/solvers/ode/riccati.py b/sympy/solvers/ode/riccati.py\nindex 2ef66ed0896d..6de7fd80e32d 100644\n--- a/sympy/solvers/ode/riccati.py\n+++ b/sympy/solvers/ode/riccati.py\n@@ -295,7 +295,7 @@ def match_riccati(eq, f, x):\n \"\"\"\n # Group terms based on f(x)\n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n eq = eq.expand().collect(f(x))\n cf = eq.coeff(f(x).diff(x))\n \ndiff --git a/sympy/solvers/ode/subscheck.py b/sympy/solvers/ode/subscheck.py\nindex 6ac7fba7d364..9cd63189bb4b 100644\n--- a/sympy/solvers/ode/subscheck.py\n+++ b/sympy/solvers/ode/subscheck.py\n@@ -177,7 +177,7 @@ def checkodesol(ode, sol, func=None, order='auto', solve_for_func=True):\n if testnum == 0:\n # First pass, try substituting a solved solution directly into the\n # ODE. This has the highest chance of succeeding.\n- ode_diff = ode.lhs - ode.rhs\n+ ode_diff = ode.lhs_rhs()\n \n if sol.lhs == func:\n s = sub_func_doit(ode_diff, func, sol.rhs)\n@@ -221,7 +221,7 @@ def checkodesol(ode, sol, func=None, order='auto', solve_for_func=True):\n diffsols = {0: sol.lhs}\n else:\n diffsols = {}\n- sol = sol.lhs - sol.rhs\n+ sol = sol.lhs_rhs()\n for i in range(1, order + 1):\n # Differentiation is a linear operator, so there should always\n # be 1 solution. 
Nonetheless, we test just to make sure.\n@@ -345,7 +345,7 @@ def _sympify(eq):\n eqs = _sympify(eqs)\n for i in range(len(eqs)):\n if isinstance(eqs[i], Equality):\n- eqs[i] = eqs[i].lhs - eqs[i].rhs\n+ eqs[i] = eqs[i].lhs_rhs()\n if func is None:\n funcs = []\n for eq in eqs:\ndiff --git a/sympy/solvers/ode/tests/test_riccati.py b/sympy/solvers/ode/tests/test_riccati.py\nindex 548a1ee5b5e8..563c15939464 100644\n--- a/sympy/solvers/ode/tests/test_riccati.py\n+++ b/sympy/solvers/ode/tests/test_riccati.py\n@@ -717,7 +717,7 @@ def check_dummy_sol(eq, solse, dummy_sym):\n contains dummy symbols.\n \"\"\"\n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()\n _, funcs = match_riccati(eq, f, x)\n \n sols = solve_riccati(f(x), x, *funcs)\ndiff --git a/sympy/solvers/pde.py b/sympy/solvers/pde.py\nindex f54e1e3cb049..6109b7df8713 100644\n--- a/sympy/solvers/pde.py\n+++ b/sympy/solvers/pde.py\n@@ -468,7 +468,7 @@ def checkpdesol(pde, sol, func=None, solve_for_func=True):\n \n # try direct substitution of the solution into the PDE and simplify\n if sol.lhs == func:\n- pde = pde.lhs - pde.rhs\n+ pde = pde.lhs_rhs()\n s = simplify(pde.subs(func, sol.rhs).doit())\n return s is S.Zero, s\n \ndiff --git a/sympy/solvers/recurr.py b/sympy/solvers/recurr.py\nindex ba627bbd4cb0..c7ed7c981a71 100644\n--- a/sympy/solvers/recurr.py\n+++ b/sympy/solvers/recurr.py\n@@ -728,7 +728,7 @@ def rsolve(f, y, init=None):\n \n \"\"\"\n if isinstance(f, Equality):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()\n \n n = y.args[0]\n k = Wild('k', exclude=(n,))\ndiff --git a/sympy/solvers/simplex.py b/sympy/solvers/simplex.py\nindex ea54fefb780e..d40ff6e8a77f 100644\n--- a/sympy/solvers/simplex.py\n+++ b/sympy/solvers/simplex.py\n@@ -734,7 +734,7 @@ def _lp_matrices(objective, constraints):\n # change Eq(x, y) to x - y <= 0 and y - x <= 0\n for i in range(len(np)):\n if isinstance(np[i], Eq):\n- np[i] = np[i].lhs - np[i].rhs <= 0\n+ np[i] = np[i].lhs_rhs() <= 0\n np.append(-np[i].lhs <= 0)\n \n # convert constraints to nonpositive expressions\ndiff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py\nindex b79eb737804d..c04f0a99a6cd 100644\n--- a/sympy/solvers/solvers.py\n+++ b/sympy/solvers/solvers.py\n@@ -274,7 +274,7 @@ def checksol(f, symbol, sol=None, **flags):\n if not f.is_Boolean:\n return\n else:\n- f = f.rewrite(Add, evaluate=False, deep=False)\n+ f = f.lhs_rhs(evaluate=False, deep=False)\n \n if isinstance(f, BooleanAtom):\n return bool(f)\n@@ -928,7 +928,7 @@ def _sympified_list(w):\n for i, fi in enumerate(f):\n if isinstance(fi, (Eq, Ne)):\n if 'ImmutableDenseMatrix' in [type(a).__name__ for a in fi.args]:\n- fi = fi.lhs - fi.rhs\n+ fi = fi.lhs_rhs()\n else:\n L, R = fi.args\n if isinstance(R, BooleanAtom):\n@@ -948,7 +948,7 @@ def _sympified_list(w):\n is True or False.\n '''))\n else:\n- fi = fi.rewrite(Add, evaluate=False, deep=False)\n+ fi = fi.lhs_rhs(evaluate=False, deep=False)\n f[i] = fi\n \n # *** dispatch and handle as a system of relationals\n@@ -2420,7 +2420,7 @@ def solve_undetermined_coeffs(equ, coeffs, *syms, **flags):\n raise ValueError('must provide symbols for coeffs')\n \n if isinstance(equ, Eq):\n- eq = equ.lhs - equ.rhs\n+ eq = equ.lhs_rhs()\n else:\n eq = equ\n \n@@ -3053,14 +3053,14 @@ def nsolve(*args, dict=False, **kwargs):\n f = list(f)\n for i, fi in enumerate(f):\n if isinstance(fi, Eq):\n- f[i] = fi.lhs - fi.rhs\n+ f[i] = fi.lhs_rhs()\n f = Matrix(f).T\n if iterable(x0):\n x0 = list(x0)\n if not isinstance(f, Matrix):\n # assume it's a SymPy expression\n if 
isinstance(f, Eq):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()\n elif f.is_Relational:\n raise TypeError('nsolve cannot accept inequalities')\n syms = f.free_symbols\n@@ -3428,7 +3428,7 @@ def _take(d):\n _take = flags.setdefault('_take', _take)\n \n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs # XXX legacy Eq as Eqn support\n+ eq = eq.lhs_rhs() # XXX legacy Eq as Eqn support\n elif not isinstance(eq, Expr):\n return\n \ndiff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py\nindex dc95ca03d6fb..7902c0e9de65 100644\n--- a/sympy/solvers/solveset.py\n+++ b/sympy/solvers/solveset.py\n@@ -1065,7 +1065,7 @@ def _solveset(f, symbol, domain, _check=False):\n solns = solver(expr, symbol, in_set)\n result += solns\n elif isinstance(f, Eq):\n- result = solver(Add(f.lhs, - f.rhs, evaluate=False), symbol, domain)\n+ result = solver(f.lhs_rhs(evaluate=False), symbol, domain)\n \n elif f.is_Relational:\n from .inequalities import solve_univariate_inequality\n@@ -3311,7 +3311,7 @@ def _solve_using_known_values(result, solver):\n # list.\n result.remove(res)\n continue # skip as it's independent of desired symbols\n- depen1, depen2 = (eq2.rewrite(Add)).as_independent(*unsolved_syms)\n+ depen1, depen2 = (eq2.lhs_rhs()).as_independent(*unsolved_syms)\n if (depen1.has(Abs) or depen2.has(Abs)) and solver == solveset_complex:\n # Absolute values cannot be inverted in the\n # complex domain\n@@ -3528,7 +3528,7 @@ def _separate_poly_nonpoly(system, symbols):\n denominators.update(_simple_dens(eq, symbols))\n # Convert equality to expression\n if isinstance(eq, Equality):\n- eq = eq.rewrite(Add)\n+ eq = eq.lhs_rhs()\n # try to remove sqrt and rational power\n without_radicals = unrad(simplify(eq), *symbols)\n if without_radicals:\ndiff --git a/sympy/solvers/tests/test_solvers.py b/sympy/solvers/tests/test_solvers.py\nindex cdf3d54d9e8b..fdd8663a36ad 100644\n--- a/sympy/solvers/tests/test_solvers.py\n+++ b/sympy/solvers/tests/test_solvers.py\n@@ -2640,7 +2640,7 @@ def test_issue_10169():\n \n def test_solve_undetermined_coeffs_issue_23927():\n A, B, r, phi = symbols('A, B, r, phi')\n- eq = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)).rewrite(Add).expand(trig=True)\n+ eq = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)).lhs_rhs().expand(trig=True)\n soln = solve_undetermined_coeffs(eq, (r, phi), t)\n assert soln == [{\n phi: 2*atan((A - sqrt(A**2 + B**2))/B),\ndiff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py\nindex a1f756a4aa33..d00d5faa4310 100644\n--- a/sympy/solvers/tests/test_solveset.py\n+++ b/sympy/solvers/tests/test_solveset.py\n@@ -1,6 +1,5 @@\n from math import isclose\n \n-from sympy.core.add import Add\n from sympy.core.containers import Tuple\n from sympy.core.function import (Function, Lambda, nfloat, diff)\n from sympy.core.mod import Mod\n@@ -3223,7 +3222,7 @@ def test_issue_23318():\n Eq(x, 0.0015 * z),\n Eq(0.0015, 7845.32 * y / z),\n ]\n- eqs_expr = [eq.rewrite(Add) for eq in eqs_eq]\n+ eqs_expr = [eq.lhs_rhs() for eq in eqs_eq]\n \n sol = {(266.97755814852, 0.0340301680681629, 177985.03876568)}\n \ndiff --git a/sympy/stats/crv.py b/sympy/stats/crv.py\nindex 36e1a26e1499..bf854acab4f4 100644\n--- a/sympy/stats/crv.py\n+++ b/sympy/stats/crv.py\n@@ -418,7 +418,7 @@ def probability(self, condition, **kwargs):\n # by computing a density handled by density computation\n except NotImplementedError:\n from sympy.stats.rv import density\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()\n if not is_random(expr):\n dens = self.density\n comp = 
condition.rhs\ndiff --git a/sympy/stats/drv.py b/sympy/stats/drv.py\nindex 13517e0f6dd3..4e804afa83cb 100644\n--- a/sympy/stats/drv.py\n+++ b/sympy/stats/drv.py\n@@ -238,7 +238,7 @@ def probability(self, condition):\n prob = self.eval_prob(_domain)\n except NotImplementedError:\n from sympy.stats.rv import density\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()\n dens = density(expr)\n if not isinstance(dens, DiscreteDistribution):\n from sympy.stats.drv_types import DiscreteDistributionHandmade\ndiff --git a/sympy/stats/rv.py b/sympy/stats/rv.py\nindex 24d0aa8e90c6..963ec2c818c2 100644\n--- a/sympy/stats/rv.py\n+++ b/sympy/stats/rv.py\n@@ -467,7 +467,7 @@ def probability(self, condition, **kwargs):\n return Mul(*[self.probability(arg) for arg in condition.args])\n elif isinstance(condition, Or): # they are independent\n return Add(*[self.probability(arg) for arg in condition.args])\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()\n rvs = random_symbols(expr)\n dens = self.compute_density(expr)\n if any(pspace(rv).is_Continuous for rv in rvs):\ndiff --git a/sympy/vector/implicitregion.py b/sympy/vector/implicitregion.py\nindex fd608a07f044..5653c88feebd 100644\n--- a/sympy/vector/implicitregion.py\n+++ b/sympy/vector/implicitregion.py\n@@ -59,7 +59,7 @@ def __new__(cls, variables, equation):\n variables = Tuple(*variables)\n \n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()\n \n return super().__new__(cls, variables, equation)\n \n" }
[ { "diff_hunk": "@@ -422,7 +422,7 @@ def _check(orig_f, period):\n period = None\n \n if isinstance(f, Relational):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()", "line": null, "original_line": 425, "original_start_line": null, "path": "sympy/calculus/util.py", "start_line": null, "text": "@author:\n```suggestion\r\n f = f.lhs - f.rhs\r\n```" }, { "diff_hunk": "@@ -437,7 +437,7 @@ def _eval_simplify(self, **kwargs):\n try:\n from sympy.solvers.solveset import linear_coeffs\n x = free.pop()\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()", "line": null, "original_line": 440, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n dif = r.lhs - r.rhs\r\n```" }, { "diff_hunk": "@@ -467,7 +467,7 @@ def _eval_simplify(self, **kwargs):\n from sympy.solvers.solveset import linear_coeffs\n from sympy.polys.polytools import gcd\n free = list(ordered(free))\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()", "line": null, "original_line": 470, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n dif = r.lhs - r.rhs\r\n```" }, { "diff_hunk": "@@ -594,7 +594,7 @@ class Equality(Relational):\n \n Since this object is already an expression, it does not respond to\n the method ``as_expr`` if one tries to create `x - y` from ``Eq(x, y)``.\n- This can be done with the ``rewrite(Add)`` method.\n+ This can be done with the ``lhs_rhs()`` method.", "line": null, "original_line": 597, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n This can be done for ``eq = Eq(x, y)`` with `eq.lhs - eq.rhs`.\r\n```" }, { "diff_hunk": "@@ -558,14 +560,14 @@ def test_x_minus_y_not_same_as_x_lt_y():\n \n ineq = Lt(x, y, evaluate=False)\n raises(TypeError, lambda: ineq.doit())\n- assert ineq.lhs - ineq.rhs < 0\n+ assert ineq.lhs_rhs() < 0", "line": null, "original_line": 563, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n assert ineq.lhs - ineq.rhs < 0\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,32 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr\n+ assert (x + 1).lhs_rhs() == x + 1", "line": null, "original_line": 1038, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,32 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr\n+ assert (x + 1).lhs_rhs() == x + 1\n+\n eq = Eq(x + y, y - x)", "line": null, "original_line": 1040, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,32 @@ def test_rel_args():\n 
raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr\n+ assert (x + 1).lhs_rhs() == x + 1\n+\n eq = Eq(x + y, y - x)\n- assert eq.rewrite(Add) == 2*x\n- assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n- assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n+ assert eq.lhs_rhs() == 2*x\n+ assert eq.lhs_rhs(evaluate=None).args == (x, x, y, -y)\n+ assert eq.lhs_rhs(evaluate=False).args == (x, y, x, -y)", "line": null, "original_line": 1043, "original_start_line": 1041, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,32 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr\n+ assert (x + 1).lhs_rhs() == x + 1\n+\n eq = Eq(x + y, y - x)\n- assert eq.rewrite(Add) == 2*x\n- assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n- assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n+ assert eq.lhs_rhs() == 2*x\n+ assert eq.lhs_rhs(evaluate=None).args == (x, x, y, -y)\n+ assert eq.lhs_rhs(evaluate=False).args == (x, y, x, -y)\n for e in (True, False, None):\n- assert Eq(x, 0, evaluate=e).rewrite(Add) == x\n- assert Eq(0, x, evaluate=e).rewrite(Add) == x\n+ assert Eq(x, 0, evaluate=e).lhs_rhs() == x\n+ assert Eq(0, x, evaluate=e).lhs_rhs() == x\n+", "line": null, "original_line": 1047, "original_start_line": 1045, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -830,7 +830,7 @@ def __eval_cond(cls, cond):\n return True\n if isinstance(cond, Eq):\n try:\n- diff = cond.lhs - cond.rhs\n+ diff = cond.lhs_rhs()", "line": null, "original_line": 833, "original_start_line": null, "path": "sympy/functions/elementary/piecewise.py", "start_line": null, "text": "@author:\n```suggestion\r\n diff = cond.lhs - cond.rhs\r\n```" }, { "diff_hunk": "@@ -1557,7 +1557,7 @@ def __new__(cls, *args, **kwargs):\n y = kwargs.get('y', 'y')\n equation = args[0].expand()\n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()", "line": null, "original_line": 1560, "original_start_line": null, "path": "sympy/geometry/ellipse.py", "start_line": null, "text": "@author:\n```suggestion\r\n equation = equation.lhs - equation.rhs\r\n```\n\n@user1:\nI don't see how\r\n```\r\nequation = (_:= equation).lhs - _.rhs\r\n```\r\nis better than the original:\r\n```\r\nequation = equation.lhs - equation.rhs\r\n```\n\n@author:\nI was interrupted mid-edit" }, { "diff_hunk": "@@ -1192,7 +1192,7 @@ def __new__(cls, *args, **kwargs):\n \n equation = args[0]\n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()", "line": null, "original_line": 1195, "original_start_line": null, "path": "sympy/geometry/line.py", "start_line": null, "text": "@author:\n```suggestion\r\n equation 
= equation.lhs - equation.rhs\r\n```" }, { "diff_hunk": "@@ -662,7 +661,7 @@ def _eval_simplify(self, **kwargs):\n if (e.lhs != x or x in e.rhs.free_symbols) and x not in reps:\n try:\n m, b = linear_coeffs(\n- e.rewrite(Add, evaluate=False), x)\n+ e.lhs_rhs(evaluate=False), x)", "line": null, "original_line": 664, "original_start_line": null, "path": "sympy/logic/boolalg.py", "start_line": null, "text": "@author:\n```suggestion\r\n Add(e.lhs, -e.rhs, evaluate=False), x)\r\n```" }, { "diff_hunk": "@@ -2485,7 +2485,7 @@ def _preprocess_meshgrid_expression(expr, adaptive):\n \"\"\"\n equality = False\n if isinstance(expr, Equality):\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()", "line": null, "original_line": 2488, "original_start_line": null, "path": "sympy/plotting/series.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr = expr.lhs - expr.rhs\r\n```" }, { "diff_hunk": "@@ -190,7 +190,7 @@ def _parallel_dict_from_expr_if_gens(exprs, opt):\n poly = {}\n \n if expr.is_Equality:\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()", "line": null, "original_line": 193, "original_start_line": null, "path": "sympy/polys/polyutils.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr = expr.lhs - expr.rhs\r\n```" }, { "diff_hunk": "@@ -249,7 +249,7 @@ def _is_coeff(factor):\n terms = []\n \n if expr.is_Equality:\n- expr = expr.lhs - expr.rhs\n+ expr = expr.lhs_rhs()", "line": null, "original_line": 252, "original_start_line": null, "path": "sympy/polys/polyutils.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr = expr.lhs - expr.rhs\r\n```" }, { "diff_hunk": "@@ -173,7 +173,7 @@ def _desolve(eq, func=None, hint=\"default\", ics=None, simplify=True, *, prep=Tru\n classify_pde(pde.py)\n \"\"\"\n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 176, "original_start_line": null, "path": "sympy/solvers/deutils.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -1335,7 +1335,7 @@ def diophantine(eq, param=symbols(\"t\", integer=True), syms=None,\n eq = _sympify(eq)\n \n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 1338, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -233,7 +233,7 @@ def reduce_rational_inequalities(exprs, gen, relational=True):\n expr, rel = expr\n else:\n if expr.is_Relational:\n- expr, rel = expr.lhs - expr.rhs, expr.rel_op\n+ expr, rel = expr.lhs_rhs(), expr.rel_op", "line": null, "original_line": 236, "original_start_line": null, "path": "sympy/solvers/inequalities.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr, rel = expr.lhs - expr.rhs, expr.rel_op\r\n```" }, { "diff_hunk": "@@ -479,7 +479,7 @@ def solve_univariate_inequality(expr, gen, relational=True, domain=S.Reals, cont\n rv = S.EmptySet\n \n else:\n- e = expr.lhs - expr.rhs\n+ e = expr.lhs_rhs()", "line": null, "original_line": 482, "original_start_line": null, "path": "sympy/solvers/inequalities.py", "start_line": null, "text": "@author:\n```suggestion\r\n e = expr.lhs - expr.rhs\r\n```" }, { "diff_hunk": "@@ -805,7 +805,7 @@ def classify(ie, s, i):\n \n rv = None\n oo = S.Infinity\n- expr = ie.lhs - ie.rhs\n+ expr = ie.lhs_rhs()", "line": null, "original_line": 808, "original_start_line": null, "path": "sympy/solvers/inequalities.py", 
"start_line": null, "text": "@author:\n```suggestion\r\n expr = ie.lhs - ie.rhs\r\n```" }, { "diff_hunk": "@@ -230,7 +230,7 @@ def infinitesimals(eq, func=None, order=None, hint='default', match=None):\n \"\"\"\n \n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 233, "original_start_line": null, "path": "sympy/solvers/ode/lie_group.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -943,7 +943,7 @@ class in it. Note that a hint may do this anyway if\n \"work with functions of one variable, not %s\" % func)\n \n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 946, "original_start_line": null, "path": "sympy/solvers/ode/ode.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -1213,7 +1213,7 @@ def _sympify(eq):\n eq, funcs = (_sympify(w) for w in [eq, funcs])\n for i, fi in enumerate(eq):\n if isinstance(fi, Equality):\n- eq[i] = fi.lhs - fi.rhs\n+ eq[i] = fi.lhs_rhs()", "line": null, "original_line": 1216, "original_start_line": null, "path": "sympy/solvers/ode/ode.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq[i] = fi.lhs - fi.rhs\r\n```" }, { "diff_hunk": "@@ -2815,7 +2815,7 @@ def checkinfsol(eq, infinitesimals, func=None, order=None):\n \n \"\"\"\n if isinstance(eq, Equality):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 2818, "original_start_line": null, "path": "sympy/solvers/ode/ode.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -295,7 +295,7 @@ def match_riccati(eq, f, x):\n \"\"\"\n # Group terms based on f(x)\n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 298, "original_start_line": null, "path": "sympy/solvers/ode/riccati.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -177,7 +177,7 @@ def checkodesol(ode, sol, func=None, order='auto', solve_for_func=True):\n if testnum == 0:\n # First pass, try substituting a solved solution directly into the\n # ODE. 
This has the highest chance of succeeding.\n- ode_diff = ode.lhs - ode.rhs\n+ ode_diff = ode.lhs_rhs()", "line": null, "original_line": 180, "original_start_line": null, "path": "sympy/solvers/ode/subscheck.py", "start_line": null, "text": "@author:\n```suggestion\r\n ode_diff = ode.lhs - ode.rhs\r\n```" }, { "diff_hunk": "@@ -221,7 +221,7 @@ def checkodesol(ode, sol, func=None, order='auto', solve_for_func=True):\n diffsols = {0: sol.lhs}\n else:\n diffsols = {}\n- sol = sol.lhs - sol.rhs\n+ sol = sol.lhs_rhs()", "line": null, "original_line": 224, "original_start_line": null, "path": "sympy/solvers/ode/subscheck.py", "start_line": null, "text": "@author:\n```suggestion\r\n sol = sol.lhs - sol.rhs\r\n```" }, { "diff_hunk": "@@ -345,7 +345,7 @@ def _sympify(eq):\n eqs = _sympify(eqs)\n for i in range(len(eqs)):\n if isinstance(eqs[i], Equality):\n- eqs[i] = eqs[i].lhs - eqs[i].rhs\n+ eqs[i] = eqs[i].lhs_rhs()", "line": null, "original_line": 348, "original_start_line": null, "path": "sympy/solvers/ode/subscheck.py", "start_line": null, "text": "@author:\n```suggestion\r\n eqs[i] = eqs[i].lhs - eqs[i].rhs\r\n```" }, { "diff_hunk": "@@ -717,7 +717,7 @@ def check_dummy_sol(eq, solse, dummy_sym):\n contains dummy symbols.\n \"\"\"\n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 720, "original_start_line": null, "path": "sympy/solvers/ode/tests/test_riccati.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -468,7 +468,7 @@ def checkpdesol(pde, sol, func=None, solve_for_func=True):\n \n # try direct substitution of the solution into the PDE and simplify\n if sol.lhs == func:\n- pde = pde.lhs - pde.rhs\n+ pde = pde.lhs_rhs()", "line": null, "original_line": 471, "original_start_line": null, "path": "sympy/solvers/pde.py", "start_line": null, "text": "@author:\n```suggestion\r\n pde = pde.lhs - pde.rhs\r\n```" }, { "diff_hunk": "@@ -728,7 +728,7 @@ def rsolve(f, y, init=None):\n \n \"\"\"\n if isinstance(f, Equality):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()", "line": null, "original_line": 731, "original_start_line": null, "path": "sympy/solvers/recurr.py", "start_line": null, "text": "@author:\n```suggestion\r\n f = f.lhs - f.rhs\r\n```" }, { "diff_hunk": "@@ -734,7 +734,7 @@ def _lp_matrices(objective, constraints):\n # change Eq(x, y) to x - y <= 0 and y - x <= 0\n for i in range(len(np)):\n if isinstance(np[i], Eq):\n- np[i] = np[i].lhs - np[i].rhs <= 0\n+ np[i] = np[i].lhs_rhs() <= 0", "line": null, "original_line": 737, "original_start_line": null, "path": "sympy/solvers/simplex.py", "start_line": null, "text": "@author:\n```suggestion\r\n np[i] = np[i].lhs - np[i].rhs <= 0\r\n```" }, { "diff_hunk": "@@ -274,7 +274,7 @@ def checksol(f, symbol, sol=None, **flags):\n if not f.is_Boolean:\n return\n else:\n- f = f.rewrite(Add, evaluate=False, deep=False)\n+ f = f.lhs_rhs(evaluate=False, deep=False)", "line": null, "original_line": 277, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n f = Add(f.lhs, -f.rhs, evaluate=False)\r\n```" }, { "diff_hunk": "@@ -928,7 +928,7 @@ def _sympified_list(w):\n for i, fi in enumerate(f):\n if isinstance(fi, (Eq, Ne)):\n if 'ImmutableDenseMatrix' in [type(a).__name__ for a in fi.args]:\n- fi = fi.lhs - fi.rhs\n+ fi = fi.lhs_rhs()", "line": null, "original_line": 931, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": 
"@author:\n```suggestion\r\n fi = fi.lhs - fi.rhs\r\n```" }, { "diff_hunk": "@@ -948,7 +948,7 @@ def _sympified_list(w):\n is True or False.\n '''))\n else:\n- fi = fi.rewrite(Add, evaluate=False, deep=False)\n+ fi = fi.lhs_rhs(evaluate=False, deep=False)", "line": null, "original_line": 951, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n fi = Add(fi.lhs, -fi.rhs, evaluate=False)\r\n```" }, { "diff_hunk": "@@ -2420,7 +2420,7 @@ def solve_undetermined_coeffs(equ, coeffs, *syms, **flags):\n raise ValueError('must provide symbols for coeffs')\n \n if isinstance(equ, Eq):\n- eq = equ.lhs - equ.rhs\n+ eq = equ.lhs_rhs()", "line": null, "original_line": 2423, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = equ.lhs - equ.rhs\r\n```" }, { "diff_hunk": "@@ -3053,14 +3053,14 @@ def nsolve(*args, dict=False, **kwargs):\n f = list(f)\n for i, fi in enumerate(f):\n if isinstance(fi, Eq):\n- f[i] = fi.lhs - fi.rhs\n+ f[i] = fi.lhs_rhs()", "line": null, "original_line": 3056, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n f[i] = fi.lhs - fi.rhs\r\n```" }, { "diff_hunk": "@@ -1335,7 +1335,7 @@ def diophantine(eq, param=symbols(\"t\", integer=True), syms=None,\n eq = _sympify(eq)\n \n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs\n+ eq = eq.lhs_rhs()", "line": null, "original_line": 1338, "original_start_line": null, "path": "sympy/solvers/diophantine/diophantine.py", "start_line": null, "text": "@user1:\nI don't think that this is an improvement." }, { "diff_hunk": "@@ -3428,7 +3428,7 @@ def _take(d):\n _take = flags.setdefault('_take', _take)\n \n if isinstance(eq, Eq):\n- eq = eq.lhs - eq.rhs # XXX legacy Eq as Eqn support\n+ eq = eq.lhs_rhs() # XXX legacy Eq as Eqn support", "line": null, "original_line": 3431, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs # XXX legacy Eq as Eqn support\r\n```" }, { "diff_hunk": "@@ -3053,14 +3053,14 @@ def nsolve(*args, dict=False, **kwargs):\n f = list(f)\n for i, fi in enumerate(f):\n if isinstance(fi, Eq):\n- f[i] = fi.lhs - fi.rhs\n+ f[i] = fi.lhs_rhs()\n f = Matrix(f).T\n if iterable(x0):\n x0 = list(x0)\n if not isinstance(f, Matrix):\n # assume it's a SymPy expression\n if isinstance(f, Eq):\n- f = f.lhs - f.rhs\n+ f = f.lhs_rhs()", "line": null, "original_line": 3063, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n f = f.lhs - f.rhs\r\n```" }, { "diff_hunk": "@@ -3311,7 +3311,7 @@ def _solve_using_known_values(result, solver):\n # list.\n result.remove(res)\n continue # skip as it's independent of desired symbols\n- depen1, depen2 = (eq2.rewrite(Add)).as_independent(*unsolved_syms)\n+ depen1, depen2 = (eq2.lhs_rhs()).as_independent(*unsolved_syms)", "line": null, "original_line": 3314, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n depen1, depen2 = Add(eq2.lhs - eq2.rhs, evaluate=False).as_independent(*unsolved_syms)\r\n```" }, { "diff_hunk": "@@ -3528,7 +3528,7 @@ def _separate_poly_nonpoly(system, symbols):\n denominators.update(_simple_dens(eq, symbols))\n # Convert equality to expression\n if isinstance(eq, Equality):\n- eq = eq.rewrite(Add)\n+ eq = eq.lhs_rhs()", "line": null, 
"original_line": 3531, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n eq = eq.lhs - eq.rhs\r\n```" }, { "diff_hunk": "@@ -2640,7 +2640,7 @@ def test_issue_10169():\n \n def test_solve_undetermined_coeffs_issue_23927():\n A, B, r, phi = symbols('A, B, r, phi')\n- eq = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)).rewrite(Add).expand(trig=True)\n+ eq = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)).lhs_rhs().expand(trig=True)", "line": null, "original_line": 2643, "original_start_line": null, "path": "sympy/solvers/tests/test_solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n _ = Eq(A*sin(t) + B*cos(t), r*sin(t - phi))\r\n eq = (_.lhs - _.rhs).expand(trig=True)\r\n```" }, { "diff_hunk": "@@ -3223,7 +3222,7 @@ def test_issue_23318():\n Eq(x, 0.0015 * z),\n Eq(0.0015, 7845.32 * y / z),\n ]\n- eqs_expr = [eq.rewrite(Add) for eq in eqs_eq]\n+ eqs_expr = [eq.lhs_rhs() for eq in eqs_eq]", "line": null, "original_line": 3225, "original_start_line": null, "path": "sympy/solvers/tests/test_solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n eqs_expr = [eq.lhs - eq.rhs for eq in eqs_eq]\r\n```" }, { "diff_hunk": "@@ -418,7 +418,7 @@ def probability(self, condition, **kwargs):\n # by computing a density handled by density computation\n except NotImplementedError:\n from sympy.stats.rv import density\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()", "line": null, "original_line": 421, "original_start_line": null, "path": "sympy/stats/crv.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr = condition.lhs - condition.rhs\r\n```" }, { "diff_hunk": "@@ -1065,7 +1065,7 @@ def _solveset(f, symbol, domain, _check=False):\n solns = solver(expr, symbol, in_set)\n result += solns\n elif isinstance(f, Eq):\n- result = solver(Add(f.lhs, - f.rhs, evaluate=False), symbol, domain)\n+ result = solver(f.lhs_rhs(evaluate=False), symbol, domain)", "line": null, "original_line": 1068, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n result = solver(Add(f.lhs, -f.rhs, evaluate=False), symbol, domain)\r\n```" }, { "diff_hunk": "@@ -467,7 +467,7 @@ def probability(self, condition, **kwargs):\n return Mul(*[self.probability(arg) for arg in condition.args])\n elif isinstance(condition, Or): # they are independent\n return Add(*[self.probability(arg) for arg in condition.args])\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()", "line": null, "original_line": 470, "original_start_line": null, "path": "sympy/stats/rv.py", "start_line": null, "text": "@author:\n```suggestion\r\n expr = condition.lhs - condition.rhs\r\n```" }, { "diff_hunk": "@@ -59,7 +59,7 @@ def __new__(cls, variables, equation):\n variables = Tuple(*variables)\n \n if isinstance(equation, Eq):\n- equation = equation.lhs - equation.rhs\n+ equation = equation.lhs_rhs()", "line": null, "original_line": 62, "original_start_line": null, "path": "sympy/vector/implicitregion.py", "start_line": null, "text": "@author:\n```suggestion\r\n equation = equation.lhs - equation.rhs\r\n```" }, { "diff_hunk": "@@ -238,7 +238,7 @@ def probability(self, condition):\n prob = self.eval_prob(_domain)\n except NotImplementedError:\n from sympy.stats.rv import density\n- expr = condition.lhs - condition.rhs\n+ expr = condition.lhs_rhs()", "line": null, "original_line": 241, "original_start_line": null, "path": "sympy/stats/drv.py", "start_line": null, 
"text": "@author:\n```suggestion\r\n expr = condition.lhs - condition.rhs\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,18 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n- eq = Eq(x + y, y - x)\n- assert eq.rewrite(Add) == 2*x\n- assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n- assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n+def test_Equality_lhs_rhs():\n+\n for e in (True, False, None):", "line": null, "original_line": 1033, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -565,7 +567,7 @@ def test_x_minus_y_not_same_as_x_lt_y():\n y = 3 + t\n ineq = Lt(x, y, evaluate=False)\n raises(TypeError, lambda: ineq.doit())\n- assert ineq.lhs - ineq.rhs < 0\n+ assert ineq.lhs_rhs() < 0", "line": null, "original_line": 570, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n assert ineq.lhs - ineq.rhs < 0\r\n```" }, { "diff_hunk": "@@ -683,11 +684,10 @@ def _eval_simplify(self, **kwargs):\n free = self.free_symbols\n if len(free) == 1:\n try:\n- from .add import Add\n from sympy.solvers.solveset import linear_coeffs\n x = free.pop()\n m, b = linear_coeffs(\n- e.rewrite(Add, evaluate=False), x)\n+ e.lhs_rhs(evaluate=False), x)", "line": null, "original_line": 690, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n Add(e.lhs, -e.rhs, evaluate=False), x)\r\n```" }, { "diff_hunk": "@@ -3311,7 +3311,7 @@ def _solve_using_known_values(result, solver):\n # list.\n result.remove(res)\n continue # skip as it's independent of desired symbols\n- depen1, depen2 = (eq2.rewrite(Add)).as_independent(*unsolved_syms)\n+ depen1, depen2 = Add(eq2.lhs - eq2.rhs, evaluate=False).as_independent(*unsolved_syms)", "line": null, "original_line": 3314, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n depen1, depen2 = (eq2.lhs - eq2.rhs).as_independent(*unsolved_syms)\r\n```" }, { "diff_hunk": "@@ -594,7 +594,7 @@ class Equality(Relational):\n \n Since this object is already an expression, it does not respond to\n the method ``as_expr`` if one tries to create `x - y` from ``Eq(x, y)``.\n- This can be done with the ``rewrite(Add)`` method.\n+ This can be done for ``eq = Eq(x, y)`` with `eq.lhs - eq.rhs`.", "line": null, "original_line": 597, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n If ``eq = Eq(x, y)`` then write `eq.lhs - eq.rhs` to get ``x - y``.\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,17 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n- eq = Eq(x + y, y - x)\n- assert eq.rewrite(Add) == 2*x\n- assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n- assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n- for e in (True, False, None):\n- assert Eq(x, 0, evaluate=e).rewrite(Add) == x\n- assert Eq(0, x, evaluate=e).rewrite(Add) == x\n+def test_nothing_happens_to_Eq_condition_during_simplify():\n+", "line": null, "original_line": 1032, "original_start_line": null, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" }, { "diff_hunk": "@@ -3311,7 +3311,7 @@ def 
_solve_using_known_values(result, solver):\n # list.\n result.remove(res)\n continue # skip as it's independent of desired symbols\n- depen1, depen2 = (eq2.rewrite(Add)).as_independent(*unsolved_syms)\n+ depen1, depen2 = (eq2.lhs - eq2.rhs).as_independent(*unsolved_syms)", "line": null, "original_line": 3314, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n # XXX this should have happened in the initially called routine\r\n # not in the private method\r\n _ = eq2.lhs - eq2.rhs if isinstance(eq2, Eq) else eq2\r\n depen1, depen2 = _.as_independent(*unsolved_syms)\r\n```" }, { "diff_hunk": "@@ -274,7 +274,7 @@ def checksol(f, symbol, sol=None, **flags):\n if not f.is_Boolean:\n return\n else:", "line": null, "original_line": 276, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n elif isinstance(f, Eq):\r\n```" }, { "diff_hunk": "@@ -948,7 +948,7 @@ def _sympified_list(w):\n is True or False.\n '''))\n else:", "line": null, "original_line": 950, "original_start_line": null, "path": "sympy/solvers/solvers.py", "start_line": null, "text": "@author:\n```suggestion\r\n elif isinstance(fi, Eq):\r\n```" }, { "diff_hunk": "@@ -419,7 +419,7 @@ def _eval_simplify(self, **kwargs):\n if r.is_Relational:\n if not isinstance(r.lhs, Expr) or not isinstance(r.rhs, Expr):\n return r\n- dif = r.lhs - r.rhs\n+ dif = r.lhs_rhs()", "line": null, "original_line": 422, "original_start_line": null, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n dif = r.lhs - r.rhs\r\n```" }, { "diff_hunk": "@@ -1026,18 +1028,32 @@ def test_rel_args():\n raises(TypeError, lambda: Relational(b, v, op))\n \n \n-def test_Equality_rewrite_as_Add():\n+def test_Equality_lhs_rhs():\n+ # XXX we can be strict and make every routine that want to treat\n+ # Eq like Eqn for purpose of solving explicitly test for Eq\n+ # if isinstance(eq, Eq):\n+ # eq = eq.lhs_rhs()\n+ # or allow Expr to pass\n+ # eq = eq.lhs_rhs() # valid for Eq or Expr", "line": null, "original_line": 1037, "original_start_line": 1032, "path": "sympy/core/tests/test_relational.py", "start_line": null, "text": "@user1:\nI think it is better to be strict. Code that does not know whether `eq` is an `Eq` or an `Expr` is always fragile. In any situation where either `Eq` or `Expr` is allowed then a conversion should be done at the earliest opportunity either like:\r\n```python\r\nif isinstance(eq, Eq):\r\n eq = eq.lhs_rhs()\r\nelif isinstance(eq, Expr):\r\n eq = eq\r\nelse:\r\n raise Error\r\n# Now we know we have Expr\r\n```\r\nOr it should be like:\r\n```python\r\nif isinstance(eq, Eq):\r\n lhs, rhs = eq.lhs, eq.rhs\r\nelif isinstance(eq, Expr):\r\n lhs, rhs = eq, S.Zero\r\nelse:\r\n raise Error\r\n# Now we know we have a pair of Expr\r\n```\r\nOr\r\n```python\r\nif isinstance(eq, Eq):\r\n eq = eq\r\nelif isinstance(eq, Expr):\r\n eq = Eq(eq, 0)\r\nelse:\r\n raise Error\r\n# Now we know we have Eq\r\n```\r\nI don't think that burying this logic into a convenience method helps for clarity of code.\r\n\r\nIn general we should minimise having situations where either `Eq` or `Expr` can be used or at least try to localise very carefully where the conversions are done if needed." 
}, { "diff_hunk": "@@ -68,6 +68,9 @@ class Expr(Basic, EvalfMixin):\n \n is_scalar = True # self derivative is 1\n \n+ def lhs_rhs(self, evaluate=True):\n+ return self # only for Eq\n+", "line": null, "original_line": 73, "original_start_line": 71, "path": "sympy/core/expr.py", "start_line": null, "text": "@author:\n```suggestion\r\n```" } ]
30af3bc3e5394ce4efcd25c65bd7879307ef1eba
diff --git a/doc/src/explanation/active-deprecations.md b/doc/src/explanation/active-deprecations.md index 8cb667933931..a4752713f2f8 100644 --- a/doc/src/explanation/active-deprecations.md +++ b/doc/src/explanation/active-deprecations.md @@ -76,6 +76,16 @@ SymPy deprecation warnings. ## Version 1.13 +(eq-rewrite-Add)= +### Deprecate Eq.rewrite(Add) +The ability to rewrite ``eq = Eq(x, y)`` like ``eq.rewrite(Add)`` to give ``x - y`` +has been deprecated in favor of writing ``eq.lhs - eq.rhs``. A replacement +property/method was not deemed necessary given the clarity of the explicit +use of ``lhs`` and ``rhs``, and the inclusion of this functionality in the +rewrite apparatus leads to failures when a node expecting a Boolean is re- +written as an Expr. + + (deprecated-markers-annotations-fill-rectangles)= ### Deprecate markers, annotations, fill, rectangles of the Plot class The properties ``markers, annotations, fill, rectangles`` (containing diff --git a/sympy/core/relational.py b/sympy/core/relational.py index b2dbd1f39552..9d7c1b852c94 100644 --- a/sympy/core/relational.py +++ b/sympy/core/relational.py @@ -11,6 +11,8 @@ from sympy.logic.boolalg import Boolean, BooleanAtom from sympy.utilities.iterables import sift from sympy.utilities.misc import filldedent +from sympy.utilities.exceptions import sympy_deprecation_warning + __all__ = ( 'Rel', 'Eq', 'Ne', 'Lt', 'Le', 'Gt', 'Ge', @@ -594,7 +596,7 @@ class Equality(Relational): Since this object is already an expression, it does not respond to the method ``as_expr`` if one tries to create `x - y` from ``Eq(x, y)``. - This can be done with the ``rewrite(Add)`` method. + If ``eq = Eq(x, y)`` then write `eq.lhs - eq.rhs` to get ``x - y``. .. deprecated:: 1.5 @@ -635,6 +637,11 @@ def _eval_rewrite_as_Add(self, L, R, evaluate=True, **kwargs): non-canonical args will be returned. If one side is 0, the non-zero side will be returned. + .. deprecated:: 1.13 + + The method ``Eq.rewrite(Add)`` is deprecated. + See :ref:`eq-rewrite-Add` for details. + Examples ======== @@ -648,6 +655,16 @@ def _eval_rewrite_as_Add(self, L, R, evaluate=True, **kwargs): >>> eq.rewrite(Add, evaluate=False).args (b, x, b, -x) """ + sympy_deprecation_warning(""" + Eq.rewrite(Add) is deprecated. + + For ``eq = Eq(a, b)`` use ``eq.lhs - eq.rhs`` to obtain + ``a - b``. 
+ """, + deprecated_since_version="1.13", + active_deprecations_target="eq-rewrite-Add", + stacklevel=5, + ) from .add import _unevaluated_Add, Add if L == 0: return R @@ -687,7 +704,7 @@ def _eval_simplify(self, **kwargs): from sympy.solvers.solveset import linear_coeffs x = free.pop() m, b = linear_coeffs( - e.rewrite(Add, evaluate=False), x) + Add(e.lhs, -e.rhs, evaluate=False), x) if m.is_zero is False: enew = e.func(x, -b / m) else: diff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py index a25650fce20f..83c5e1b31392 100644 --- a/sympy/core/tests/test_relational.py +++ b/sympy/core/tests/test_relational.py @@ -12,6 +12,8 @@ from sympy.core.power import Pow from sympy.core.singleton import S from sympy.core.symbol import (Symbol, symbols) +from sympy.functions.elementary.complexes import sign, Abs +from sympy.functions.elementary.piecewise import Piecewise from sympy.functions.elementary.exponential import (exp, exp_polar, log) from sympy.functions.elementary.integers import (ceiling, floor) from sympy.functions.elementary.miscellaneous import sqrt @@ -1026,18 +1028,16 @@ def test_rel_args(): raises(TypeError, lambda: Relational(b, v, op)) -def test_Equality_rewrite_as_Add(): - eq = Eq(x + y, y - x) - assert eq.rewrite(Add) == 2*x - assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y) - assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y) - for e in (True, False, None): - assert Eq(x, 0, evaluate=e).rewrite(Add) == x - assert Eq(0, x, evaluate=e).rewrite(Add) == x +def test_nothing_happens_to_Eq_condition_during_simplify(): + # issue 25701 + r = symbols('r', real=True) + assert Eq(2*sign(r + 3)/(5*Abs(r + 3)**Rational(3, 5)), 0 + ).simplify() == Eq(Piecewise( + (0, Eq(r, -3)), ((r + 3)/(5*Abs((r + 3)**Rational(8, 5)))*2, True)), 0) def test_issue_15847(): - a = Ne(x*(x+y), x**2 + x*y) + a = Ne(x*(x + y), x**2 + x*y) assert simplify(a) == False @@ -1251,3 +1251,8 @@ def test_weak_strict(): eq = Le(x, 1) assert eq.strict == Lt(x, 1) assert eq.weak == eq + +def test_rewrite_Add(): + from sympy.testing.pytest import warns_deprecated_sympy + with warns_deprecated_sympy(): + assert Eq(x, y).rewrite(Add) == x - y diff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py index ac190ac50883..f4c4f73362d4 100644 --- a/sympy/logic/boolalg.py +++ b/sympy/logic/boolalg.py @@ -662,7 +662,7 @@ def _eval_simplify(self, **kwargs): if (e.lhs != x or x in e.rhs.free_symbols) and x not in reps: try: m, b = linear_coeffs( - e.rewrite(Add, evaluate=False), x) + Add(e.lhs, -e.rhs, evaluate=False), x) enew = e.func(x, -b/m) if measure(enew) <= ratio*measure(e): e = enew diff --git a/sympy/solvers/solvers.py b/sympy/solvers/solvers.py index b79eb737804d..758888bb0301 100644 --- a/sympy/solvers/solvers.py +++ b/sympy/solvers/solvers.py @@ -273,8 +273,8 @@ def checksol(f, symbol, sol=None, **flags): f = f.subs(sol) if not f.is_Boolean: return - else: - f = f.rewrite(Add, evaluate=False, deep=False) + elif isinstance(f, Eq): + f = Add(f.lhs, -f.rhs, evaluate=False) if isinstance(f, BooleanAtom): return bool(f) @@ -947,8 +947,8 @@ def _sympified_list(w): Unanticipated argument of Eq when other arg is True or False. 
''')) - else: - fi = fi.rewrite(Add, evaluate=False, deep=False) + elif isinstance(fi, Eq): + fi = Add(fi.lhs, -fi.rhs, evaluate=False) f[i] = fi # *** dispatch and handle as a system of relationals diff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py index dc95ca03d6fb..0f0d947d114e 100644 --- a/sympy/solvers/solveset.py +++ b/sympy/solvers/solveset.py @@ -11,7 +11,7 @@ - solve a system of Non Linear Equations with N variables and M equations """ from sympy.core.sympify import sympify -from sympy.core import (S, Pow, Dummy, pi, Expr, Wild, Mul, Equality, +from sympy.core import (S, Pow, Dummy, pi, Expr, Wild, Mul, Add, Basic) from sympy.core.containers import Tuple from sympy.core.function import (Lambda, expand_complex, AppliedUndef, @@ -1065,7 +1065,7 @@ def _solveset(f, symbol, domain, _check=False): solns = solver(expr, symbol, in_set) result += solns elif isinstance(f, Eq): - result = solver(Add(f.lhs, - f.rhs, evaluate=False), symbol, domain) + result = solver(Add(f.lhs, -f.rhs, evaluate=False), symbol, domain) elif f.is_Relational: from .inequalities import solve_univariate_inequality @@ -3017,6 +3017,10 @@ def substitution(system, symbols, result=[{}], known_symbols=[], if not system: return S.EmptySet + for i, e in enumerate(system): + if isinstance(e, Eq): + system[i] = e.lhs - e.rhs + if not symbols: msg = ('Symbols must be given, for which solution of the ' 'system is to be found.') @@ -3311,7 +3315,7 @@ def _solve_using_known_values(result, solver): # list. result.remove(res) continue # skip as it's independent of desired symbols - depen1, depen2 = (eq2.rewrite(Add)).as_independent(*unsolved_syms) + depen1, depen2 = eq2.as_independent(*unsolved_syms) if (depen1.has(Abs) or depen2.has(Abs)) and solver == solveset_complex: # Absolute values cannot be inverted in the # complex domain @@ -3527,8 +3531,8 @@ def _separate_poly_nonpoly(system, symbols): # Store denom expressions that contain symbols denominators.update(_simple_dens(eq, symbols)) # Convert equality to expression - if isinstance(eq, Equality): - eq = eq.rewrite(Add) + if isinstance(eq, Eq): + eq = eq.lhs - eq.rhs # try to remove sqrt and rational power without_radicals = unrad(simplify(eq), *symbols) if without_radicals: diff --git a/sympy/solvers/tests/test_solvers.py b/sympy/solvers/tests/test_solvers.py index cdf3d54d9e8b..c2f80f6af4ef 100644 --- a/sympy/solvers/tests/test_solvers.py +++ b/sympy/solvers/tests/test_solvers.py @@ -2640,7 +2640,8 @@ def test_issue_10169(): def test_solve_undetermined_coeffs_issue_23927(): A, B, r, phi = symbols('A, B, r, phi') - eq = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)).rewrite(Add).expand(trig=True) + e = Eq(A*sin(t) + B*cos(t), r*sin(t - phi)) + eq = (e.lhs - e.rhs).expand(trig=True) soln = solve_undetermined_coeffs(eq, (r, phi), t) assert soln == [{ phi: 2*atan((A - sqrt(A**2 + B**2))/B), diff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py index a1f756a4aa33..7acf6daffb6f 100644 --- a/sympy/solvers/tests/test_solveset.py +++ b/sympy/solvers/tests/test_solveset.py @@ -1,6 +1,5 @@ from math import isclose -from sympy.core.add import Add from sympy.core.containers import Tuple from sympy.core.function import (Function, Lambda, nfloat, diff) from sympy.core.mod import Mod @@ -3223,7 +3222,7 @@ def test_issue_23318(): Eq(x, 0.0015 * z), Eq(0.0015, 7845.32 * y / z), ] - eqs_expr = [eq.rewrite(Add) for eq in eqs_eq] + eqs_expr = [eq.lhs - eq.rhs for eq in eqs_eq] sol = {(266.97755814852, 0.0340301680681629, 177985.03876568)}
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25528@7586c52
sympy/sympy
Python
25528
Improve Logical Simplification Handling in sympy.logic.boolalg
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs Fixes #25451 <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed - Revised the `binary_check_and_simplify` function in the `boolalg` module to handle binary symbols more effectively in logical expressions. - Eliminated unnecessary error raises for certain relational expressions that involve binary symbols. #### Other comments Separately we could consider improving the detection of erroneous mixing of symbols as Expr and Boolean but I don't think it is very important: time would be better spent adding a BooleanSymbol class and considering if we would ever be able to deprecate using Symbol as a Boolean. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * logic * Improved handling of logical simplification involving binary symbols like `Eq` <!-- END RELEASE NOTES -->
2023-08-16T09:54:16Z
Incorrect simplification when mixing basic logical operators and equality Have I misunderstood something about how sympy is supposed to work? This basic simplification of a small logical expression is incorrect. ```python from sympy import * a,b,c = symbols("a b c") expr = Or(And(a,c), Eq(a,b)) # incorrectly simplifies to a & c print(expr) # a & c # counter-example # Or(And(a,c), Eq(a,b)) with a=True, b=True, c=False # The simplification incorrectly returns False print(expr.subs({a:True, b:True, c:False})) # False # The actual value is True print(Or(And(True,False), Eq(True,True))) # True ```
You are right that this simplification is incorrect: ```python In [18]: c1 = And(a, c) In [19]: c2 = Eq(a, b) In [20]: c1 Out[20]: a ∧ c In [21]: c2 Out[21]: a = b In [22]: c1 | c2 Out[22]: a ∧ c ``` Probably somewhere this comes from mixing up the idea of Symbol as being both `Expr` and `Boolean` (there should be a separate `BooleanSymbol` type). In the case of `Eq(a, b)` both `a` and `b` would be implicitly assumed to be `Expr` whereas for `And(a, c)` both `a` and `c` would be implicitly assumed to be `Boolean`. We can see the confused code that tries to handle this here: https://github.com/sympy/sympy/blob/221d773085642f7e30440ba6ffa122d5bfbb1042/sympy/logic/boolalg.py#L491-L511 That sets `Eq(a, b)` to false because true and false are not in its args. Apparently it expects that if the `Eq` contains "binary symbols" then it must only be something like `Eq(a, True)` or otherwise we can just set it to false? I really can't fathom what the purpose of that code is. Usually at this point I would suggest a way to fix the code but maybe the whole function should just be deleted. It has two purposes, one of which is to set any `Eq` or `Ne` to false if it has binary symbols among its free symbols and does not contain true or false. I can't even imagine a situation where that is valid. The other purpose of the code seems to be to raise an error if we have a relational that is not an `Eq` or `Ne` and that contains any binary symbol among its free symbols. I guess this is to raise an error for `a > 0` if `a` is also being used as a "binary" symbol: ```python In [3]: a & (a > 0) --------------------------------------------------------------------------- TypeError: Incompatible use of binary symbol `a` as a real variable in `a > 0` ``` The check is flakey though because a relational can contain binary symbols that are used as binary symbols: ```python In [4]: p = Piecewise((1, a), (2, True)) In [5]: p Out[5]: ⎧1 for a ⎨ ⎩2 otherwise In [6]: a & (p > sqrt(2)) --------------------------------------------------------------------------- TypeError Incompatible use of binary symbol `a` as a real variable in `Piecewise((1, a), (2, True)) > sqrt(2)` ``` My suggested fix is: ```diff diff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py index 27213a215b..ac190ac508 100644 --- a/sympy/logic/boolalg.py +++ b/sympy/logic/boolalg.py @@ -490,25 +490,7 @@ def __lt__(self, other): @classmethod def binary_check_and_simplify(self, *args): - from sympy.core.relational import Relational, Eq, Ne - args = [as_Boolean(i) for i in args] - bin_syms = set().union(*[i.binary_symbols for i in args]) - rel = set().union(*[i.atoms(Relational) for i in args]) - reps = {} - for x in bin_syms: - for r in rel: - if x in bin_syms and x in r.free_symbols: - if isinstance(r, (Eq, Ne)): - if not ( - true in r.args or - false in r.args): - reps[r] = false - else: - raise TypeError(filldedent(''' - Incompatible use of binary symbol `%s` as a - real variable in `%s` - ''' % (x, r))) - return [i.subs(reps) for i in args] + return [as_Boolean(i) for i in args] def to_nnf(self, simplify=True): return self._to_nnf(*self.args, simplify=simplify) ``` Then we can have: ```python In [1]: a, b, c, d = symbols('a, b, c, d') In [2]: p = Piecewise((1, a), (2, True)) In [3]: a & (p > sqrt(2)) Out[3]: ⎧1 for a a ∧ ⎨ > √2 ⎩2 otherwise In [4]: expr = Or(And(a,c), Eq(a,b)) In [5]: expr Out[5]: (a ∧ c) ∨ a = b ``` I imagine that somewhere there is a test that depends on this code doing this strange replacement but I might be wrong. 
From the core and logic tests there were a few that failed because they expected an error to be raised. Potentially those could be changed or otherwise the code to raise an error could be improved to handle those cases without also mishandling others. Really though this just shows why we need a separate `BooleanSymbol` type: `Symbol` should not be be a simultaneous subclass of both `Boolean` and `Expr` because these are incompatible and it just confuses everything. The other test that failed was this one: https://github.com/sympy/sympy/blob/221d773085642f7e30440ba6ffa122d5bfbb1042/sympy/logic/tests/test_boolalg.py#L307 I don't think that is a very important test precisely because it is just testing what happens when Symbol is mixed up between Expr and Boolean which is something that should have never been allowed in the first place. My suggested fix is: ```diff diff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py index 27213a215b..ac190ac508 100644 --- a/sympy/logic/boolalg.py +++ b/sympy/logic/boolalg.py @@ -490,25 +490,7 @@ def __lt__(self, other): @classmethod def binary_check_and_simplify(self, *args): - from sympy.core.relational import Relational, Eq, Ne - args = [as_Boolean(i) for i in args] - bin_syms = set().union(*[i.binary_symbols for i in args]) - rel = set().union(*[i.atoms(Relational) for i in args]) - reps = {} - for x in bin_syms: - for r in rel: - if x in bin_syms and x in r.free_symbols: - if isinstance(r, (Eq, Ne)): - if not ( - true in r.args or - false in r.args): - reps[r] = false - else: - raise TypeError(filldedent(''' - Incompatible use of binary symbol `%s` as a - real variable in `%s` - ''' % (x, r))) - return [i.subs(reps) for i in args] + return [as_Boolean(i) for i in args] def to_nnf(self, simplify=True): return self._to_nnf(*self.args, simplify=simplify) diff --git a/sympy/logic/tests/test_boolalg.py b/sympy/logic/tests/test_boolalg.py index 7b50f8ebd7..f3ec5604a6 100644 --- a/sympy/logic/tests/test_boolalg.py +++ b/sympy/logic/tests/test_boolalg.py @@ -57,7 +57,6 @@ def test_And(): assert And(True, False, A) is false assert And(1, A) == A raises(TypeError, lambda: And(2, A)) - raises(TypeError, lambda: And(A < 2, A)) assert And(A < 1, A >= 1) is false e = A > 1 assert And(e, e.canonical) == e.canonical @@ -82,7 +81,6 @@ def test_Or(): assert Or(False, False, A) == A assert Or(1, A) is true raises(TypeError, lambda: Or(2, A)) - raises(TypeError, lambda: Or(A < 2, A)) assert Or(A < 1, A >= 1) is true e = A > 1 assert Or(e, e.canonical) == e @@ -304,7 +302,6 @@ def test_simplification_boolalg(): assert simplify_logic(Equivalent(A, B)) == \ Or(And(A, B), And(Not(A), Not(B))) assert simplify_logic(And(Equality(A, 2), C)) == And(Equality(A, 2), C) - assert simplify_logic(And(Equality(A, 2), A)) is S.false assert simplify_logic(And(Equality(A, 2), A)) == And(Equality(A, 2), A) assert simplify_logic(And(Equality(A, B), C)) == And(Equality(A, B), C) assert simplify_logic(Or(And(Equality(A, 3), B), And(Equality(A, 3), C))) \ @@ -676,7 +673,6 @@ def test_ITE(): assert ITE(1, 1, 1) is S.true assert isinstance(ITE(1, 1, 1, evaluate=False), ITE) - raises(TypeError, lambda: ITE(x > 1, y, x)) assert ITE(Eq(x, True), y, x) == ITE(x, y, x) assert ITE(Eq(x, False), y, x) == ITE(~x, y, x) assert ITE(Ne(x, True), y, x) == ITE(~x, y, x) ``` I am marking this as "easy to fix" for someone to apply the diff shown although there might also be some other test failures (I didn't run the whole test suite). 
Separately we could consider improving the detection of erroneous mixing of symbols as Expr and Boolean but I don't think it is very important: time would be better spent adding a `BooleanSymbol` class and considering if we would ever be able to deprecate using Symbol as a Boolean. You can use `Or(And(a, c), Equivalent(a, b))` for fix meanwhile. Maybe `Equivalent` is semantically better than `Eq` for users. Thanks for the quick response @oscarbenjamin ! Unfortunately I'm not familiar enough with the internals to help out myself. @sylee957 - This `Equivalent()` and `Not(Equivalent())` stand-in is very helpful, much appreciated! > Hi @paulcalcraft `a, b, c = symbols("a b c")` - You're initializing the symbols correctly. `expr = Or(And(a,c), Eq(a,b))` - Here's where the error is. When you're constructing the expression using `Or(And(a,c), Eq(a,b))`, you're not actually using Sympy's logical operators. In Sympy, the logical "AND" is represented by `And(...)`, the logical "OR" is represented by `Or(...)`, and the logical "EQUALS" is represented by `Eq(...)`. However, the issue arises because the `Eq(a,b)` actually returns a boolean (either `True` or `False`) if `a` and `b` can be determined as equivalent or not. So if `a` and `b` are both symbols and cannot be evaluated, then `Eq(a,b)` simply represents the symbolic equation a=b. Let's see how to correctly construct and evaluate the expression. ``` from sympy import * a, b, c = symbols("a b c") expr = Or(And(a, c), Eq(a, b)) # This will print the entire expression, not just "a & c". print(expr) # And(a, c) | Eq(a, b) # Now let's evaluate it with a=True, b=True, c=False print(expr.subs({a:True, b:True, c:False})) # True ``` Hi @ManuPer3z, as far as I can tell, the code you've provided is identical to my original post? It does not print what you've suggested. What version are you using? I've just tested again with the latest stable release (SymPy 1.12) and I still get: ``` a & c False ``` Perhaps you are using an older version that didn't have this bug? Using `Equivalent` as suggested by @sylee957 is working well. I haven't closed the issue myself because presumably the SymPy maintainers want to decide whether to correct it or flag it as a gotcha. The issue should not be closed until the diff I showed is applied or something similar. There is a bug to be fixed.
[ { "body": "Have I misunderstood something about how sympy is supposed to work? This basic simplification of a small logical expression is incorrect.\r\n\r\n```python\r\nfrom sympy import *\r\n\r\na,b,c = symbols(\"a b c\")\r\n\r\nexpr = Or(And(a,c), Eq(a,b))\r\n\r\n# incorrectly simplifies to a & c\r\nprint(expr) # a & c\r\n\r\n# counter-example\r\n\r\n# Or(And(a,c), Eq(a,b)) with a=True, b=True, c=False\r\n\r\n# The simplification incorrectly returns False\r\nprint(expr.subs({a:True, b:True, c:False})) # False\r\n\r\n# The actual value is True\r\nprint(Or(And(True,False), Eq(True,True))) # True\r\n```", "number": 25451, "title": "Incorrect simplification when mixing basic logical operators and equality" } ]
3236f7439c989486b83226a4ae8e342575a6ec9f
{ "head_commit": "7586c5240d2a2ca9beef6ac8ed8cf6ad79f4d256", "head_commit_message": "Update .mailmap", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 34c061d0b266..20c42ee993f7 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1352,6 +1352,7 @@ Tanu Hari Dixit <[email protected]>\n Tarang Patel <[email protected]> <[email protected]>\n Tarun Gaba <[email protected]>\n Tasha Kim <[email protected]>\n+Taylan Sahin <[email protected]>\n Ted Dokos <[email protected]> <[email protected]>\n Ted Horst <[email protected]>\n Tejaswini Sanapathi <[email protected]>\ndiff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py\nindex 27213a215b73..ac190ac50883 100644\n--- a/sympy/logic/boolalg.py\n+++ b/sympy/logic/boolalg.py\n@@ -490,25 +490,7 @@ def __lt__(self, other):\n \n @classmethod\n def binary_check_and_simplify(self, *args):\n- from sympy.core.relational import Relational, Eq, Ne\n- args = [as_Boolean(i) for i in args]\n- bin_syms = set().union(*[i.binary_symbols for i in args])\n- rel = set().union(*[i.atoms(Relational) for i in args])\n- reps = {}\n- for x in bin_syms:\n- for r in rel:\n- if x in bin_syms and x in r.free_symbols:\n- if isinstance(r, (Eq, Ne)):\n- if not (\n- true in r.args or\n- false in r.args):\n- reps[r] = false\n- else:\n- raise TypeError(filldedent('''\n- Incompatible use of binary symbol `%s` as a\n- real variable in `%s`\n- ''' % (x, r)))\n- return [i.subs(reps) for i in args]\n+ return [as_Boolean(i) for i in args]\n \n def to_nnf(self, simplify=True):\n return self._to_nnf(*self.args, simplify=simplify)\ndiff --git a/sympy/logic/tests/test_boolalg.py b/sympy/logic/tests/test_boolalg.py\nindex 7b50f8ebd773..65fe4d14691e 100644\n--- a/sympy/logic/tests/test_boolalg.py\n+++ b/sympy/logic/tests/test_boolalg.py\n@@ -57,7 +57,6 @@ def test_And():\n assert And(True, False, A) is false\n assert And(1, A) == A\n raises(TypeError, lambda: And(2, A))\n- raises(TypeError, lambda: And(A < 2, A))\n assert And(A < 1, A >= 1) is false\n e = A > 1\n assert And(e, e.canonical) == e.canonical\n@@ -82,7 +81,6 @@ def test_Or():\n assert Or(False, False, A) == A\n assert Or(1, A) is true\n raises(TypeError, lambda: Or(2, A))\n- raises(TypeError, lambda: Or(A < 2, A))\n assert Or(A < 1, A >= 1) is true\n e = A > 1\n assert Or(e, e.canonical) == e\n@@ -304,7 +302,6 @@ def test_simplification_boolalg():\n assert simplify_logic(Equivalent(A, B)) == \\\n Or(And(A, B), And(Not(A), Not(B)))\n assert simplify_logic(And(Equality(A, 2), C)) == And(Equality(A, 2), C)\n- assert simplify_logic(And(Equality(A, 2), A)) is S.false\n assert simplify_logic(And(Equality(A, 2), A)) == And(Equality(A, 2), A)\n assert simplify_logic(And(Equality(A, B), C)) == And(Equality(A, B), C)\n assert simplify_logic(Or(And(Equality(A, 3), B), And(Equality(A, 3), C))) \\\n@@ -676,7 +673,6 @@ def test_ITE():\n assert ITE(1, 1, 1) is S.true\n assert isinstance(ITE(1, 1, 1, evaluate=False), ITE)\n \n- raises(TypeError, lambda: ITE(x > 1, y, x))\n assert ITE(Eq(x, True), y, x) == ITE(x, y, x)\n assert ITE(Eq(x, False), y, x) == ITE(~x, y, x)\n assert ITE(Ne(x, True), y, x) == ITE(~x, y, x)\n@@ -1345,3 +1341,8 @@ def test_relational_threeterm_simplification_patterns_numerically():\n assert originalvalue == simplifiedvalue, \"Original: {}\\nand\"\\\n \" simplified: {}\\ndo not evaluate to the same value for\"\\\n \"{}\".format(pattern[0], simplified, sublist)\n+\n+\n+def test_issue_25451():\n+ x = Or(And(a, c), Eq(a, b))\n+ assert x != And(a, c)\n" }
[ { "diff_hunk": "@@ -1345,3 +1341,8 @@ def test_relational_threeterm_simplification_patterns_numerically():\n assert originalvalue == simplifiedvalue, \"Original: {}\\nand\"\\\n \" simplified: {}\\ndo not evaluate to the same value for\"\\\n \"{}\".format(pattern[0], simplified, sublist)\n+\n+\n+def test_issue_25451():\n+ x = Or(And(a, c), Eq(a, b))\n+ assert x != And(a, c)", "line": null, "original_line": 1348, "original_start_line": null, "path": "sympy/logic/tests/test_boolalg.py", "start_line": null, "text": "@user1:\nTest with `!=` is not ideal because many possible objects would pass the test. A better test could be:\r\n```\r\nassert isinstance(x, Or)\r\nassert set(x.args) == {And(a, c), Eq(a, b)}\r\n```\n\n@author:\nThank you, I will update it now. I appreciate your input." } ]
5075448785ff9dde2fd05566c358b04329a91bc2
diff --git a/.mailmap b/.mailmap index 34c061d0b266..20c42ee993f7 100644 --- a/.mailmap +++ b/.mailmap @@ -1352,6 +1352,7 @@ Tanu Hari Dixit <[email protected]> Tarang Patel <[email protected]> <[email protected]> Tarun Gaba <[email protected]> Tasha Kim <[email protected]> +Taylan Sahin <[email protected]> Ted Dokos <[email protected]> <[email protected]> Ted Horst <[email protected]> Tejaswini Sanapathi <[email protected]> diff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py index 27213a215b73..ac190ac50883 100644 --- a/sympy/logic/boolalg.py +++ b/sympy/logic/boolalg.py @@ -490,25 +490,7 @@ def __lt__(self, other): @classmethod def binary_check_and_simplify(self, *args): - from sympy.core.relational import Relational, Eq, Ne - args = [as_Boolean(i) for i in args] - bin_syms = set().union(*[i.binary_symbols for i in args]) - rel = set().union(*[i.atoms(Relational) for i in args]) - reps = {} - for x in bin_syms: - for r in rel: - if x in bin_syms and x in r.free_symbols: - if isinstance(r, (Eq, Ne)): - if not ( - true in r.args or - false in r.args): - reps[r] = false - else: - raise TypeError(filldedent(''' - Incompatible use of binary symbol `%s` as a - real variable in `%s` - ''' % (x, r))) - return [i.subs(reps) for i in args] + return [as_Boolean(i) for i in args] def to_nnf(self, simplify=True): return self._to_nnf(*self.args, simplify=simplify) diff --git a/sympy/logic/tests/test_boolalg.py b/sympy/logic/tests/test_boolalg.py index 7b50f8ebd773..f31ba2561061 100644 --- a/sympy/logic/tests/test_boolalg.py +++ b/sympy/logic/tests/test_boolalg.py @@ -57,7 +57,6 @@ def test_And(): assert And(True, False, A) is false assert And(1, A) == A raises(TypeError, lambda: And(2, A)) - raises(TypeError, lambda: And(A < 2, A)) assert And(A < 1, A >= 1) is false e = A > 1 assert And(e, e.canonical) == e.canonical @@ -82,7 +81,6 @@ def test_Or(): assert Or(False, False, A) == A assert Or(1, A) is true raises(TypeError, lambda: Or(2, A)) - raises(TypeError, lambda: Or(A < 2, A)) assert Or(A < 1, A >= 1) is true e = A > 1 assert Or(e, e.canonical) == e @@ -304,7 +302,6 @@ def test_simplification_boolalg(): assert simplify_logic(Equivalent(A, B)) == \ Or(And(A, B), And(Not(A), Not(B))) assert simplify_logic(And(Equality(A, 2), C)) == And(Equality(A, 2), C) - assert simplify_logic(And(Equality(A, 2), A)) is S.false assert simplify_logic(And(Equality(A, 2), A)) == And(Equality(A, 2), A) assert simplify_logic(And(Equality(A, B), C)) == And(Equality(A, B), C) assert simplify_logic(Or(And(Equality(A, 3), B), And(Equality(A, 3), C))) \ @@ -676,7 +673,6 @@ def test_ITE(): assert ITE(1, 1, 1) is S.true assert isinstance(ITE(1, 1, 1, evaluate=False), ITE) - raises(TypeError, lambda: ITE(x > 1, y, x)) assert ITE(Eq(x, True), y, x) == ITE(x, y, x) assert ITE(Eq(x, False), y, x) == ITE(~x, y, x) assert ITE(Ne(x, True), y, x) == ITE(~x, y, x) @@ -1345,3 +1341,9 @@ def test_relational_threeterm_simplification_patterns_numerically(): assert originalvalue == simplifiedvalue, "Original: {}\nand"\ " simplified: {}\ndo not evaluate to the same value for"\ "{}".format(pattern[0], simplified, sublist) + + +def test_issue_25451(): + x = Or(And(a, c), Eq(a, b)) + assert isinstance(x, Or) + assert set(x.args) == {And(a, c), Eq(a, b)}
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25556@2d9a292
sympy/sympy
Python
25556
Added tests to prevent oo and wrong coefficient of Min in piecewise integration for cases with both float and int
#### References to other Issues or PRs Fixes #20781 #### Brief description of what is fixed or changed The test prevents the spurious additional +oo and checks for the correct coefficient (=2) of the Min function when integrating the piecewise sum with two variables. The problem occurs when the two variables defining the range of x are functionally equal but have float and int data types. <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-08-20T21:46:16Z
Sympy.integrate wrongfully returns oo when applied on Piecewise with both floats and integers of same value # introduction When integrating the sum of the shown two piecewise functions, it wrongfully returns oo. This happens when floats and integers at functionally the same value appear. IPython console for SymPy 1.7.1 (Python 3.8.5-64-bit) # example ```` import sympy as sp x_d = sp.symbols('x_d') x, y,= sp.symbols('x y') fun1 = lambda x,a1: sp.Piecewise((0,x<a1),(1,x>=a1)) fun2 = lambda x,a2: sp.Piecewise((0,x<a2),(1,x>=a2)) fun_sum = lambda x,a1,a2: sp.Piecewise((0,x<a1),(1,x>=a1)) \ +sp.Piecewise((0,x<a2),(1,x>=a2)) print('### wrong output') print('#### a1=0 a2=0.0') print(sp.integrate(fun_sum((x_d),0,0.0),(x_d,-float('Inf'),x))) print(sp.integrate(fun_sum((x_d),0,0.0),(x_d,-float('Inf'),x))) print('#### a1=0.0 a2=0') print(sp.integrate(fun_sum((x_d),0.0,0),(x_d,-float('Inf'),x))) print(sp.integrate(fun_sum((x_d),0.0,0),(x_d,-float('Inf'),x))) print('#### a1=1.0 a2=1') print(sp.integrate(fun_sum((x_d),1.0,1),(x_d,-float('Inf'),x))) print(sp.integrate(fun_sum((x_d),1.0,1),(x_d,-float('Inf'),x))) print('### expected output') print('#### a1=0.0 a2=0.0') print(sp.integrate(fun_sum((x_d),0.0,0.0),(x_d,-float('Inf'),x))) print(sp.integrate(fun_sum((x_d),0.0,0.0),(x_d,-float('Inf'),x))) print('#### a1=0 a2=0') print(sp.integrate(fun_sum((x_d),0,0),(x_d,-float('Inf'),x))) print(sp.integrate(fun_sum((x_d),0,0),(x_d,-float('Inf'),x))) ```` # output ```` ### wrong output #### a1=0 a2=0.0 2*x - Min(0.0, x) + oo 2*x - Min(0.0, x) + oo #### a1=0.0 a2=0 2*x - Min(0.0, x) + oo 2*x - Min(0.0, x) + oo #### a1=1.0 a2=1 2*x - Min(1.0, x) + oo 2*x - Min(1.0, x) + oo ### expected output #### a1=0.0 a2=0.0 2*x - 2*Min(0.0, x) 2*x - 2*Min(0.0, x) #### a1=0 a2=0 2*x - 2*Min(0, x) 2*x - 2*Min(0, x) ```` Edit: simplified example
Hey @oscarbenjamin @asmeurer the problem is caused by these lines: https://github.com/sympy/sympy/blob/14db44e03a8622da5cfe11f61ff43131ffd1fade/sympy/functions/elementary/piecewise.py#L262-L272 It took a lot of debugging. The problem is that the `current_cond` stores the conditions it had already got for example if there is a list of conditions in `cond.args = ['x <0' , 'x>1', 'x<0']` then `current_cond` will store `x < 0` only once but if there is `x < 0` as well as `x < 0.0`, they are both stored since they are treated differently and the final piecewise function made using them is messed up. Can u suggest a way to counter this. I think it has something to do with sympy based on python. If there are all floats and one of them has `0.0` then it will be converted to `0` by this and therefore causing the above mentioned problem. The problem is caused here https://github.com/sympy/sympy/blob/14db44e03a8622da5cfe11f61ff43131ffd1fade/sympy/solvers/inequalities.py#L485 which changes `x > 0.0` to `x > 0` as `e.lhs - e.rhs` in case of `x,0.0` will give `x` rather than `x-0.0`. This later makes the inequality `x > 0`. I have made change to solve that. But if there are some integers and some floats then there will be the above problem. thanks for your help. I have retested this with python 3.10 and sympy 1.11.1. The bug seems to be fixed, I am closing this issue # output ```` #### a1=0 a2=0.0 2*x - 2*Min(0, x) 2*x - 2*Min(0, x) #### a1=0.0 a2=0 2*x - 2*Min(0, x) 2*x - 2*Min(0, x) #### a1=1.0 a2=1 2*x - 2*Min(1, x) 2*x - 2*Min(1, x) ```` The fix should be bisected and a test should be added to show that the problem is fixed. This was also fixed by commit 6336e467274c67673bad286c9151f7b6e131c0d7 from gh-22286
[ { "body": "# introduction\r\nWhen integrating the sum of the shown two piecewise functions, it wrongfully returns oo.\r\nThis happens when floats and integers at functionally the same value appear.\r\n\r\nIPython console for SymPy 1.7.1 (Python 3.8.5-64-bit)\r\n\r\n# example\r\n````\r\nimport sympy as sp\r\nx_d = sp.symbols('x_d') \r\nx, y,= sp.symbols('x y')\r\n\r\n\r\nfun1 = lambda x,a1: sp.Piecewise((0,x<a1),(1,x>=a1))\r\nfun2 = lambda x,a2: sp.Piecewise((0,x<a2),(1,x>=a2))\r\n\r\nfun_sum = lambda x,a1,a2: sp.Piecewise((0,x<a1),(1,x>=a1)) \\\r\n +sp.Piecewise((0,x<a2),(1,x>=a2))\r\n\r\nprint('### wrong output')\r\nprint('#### a1=0 a2=0.0') \r\nprint(sp.integrate(fun_sum((x_d),0,0.0),(x_d,-float('Inf'),x)))\r\nprint(sp.integrate(fun_sum((x_d),0,0.0),(x_d,-float('Inf'),x)))\r\nprint('#### a1=0.0 a2=0') \r\nprint(sp.integrate(fun_sum((x_d),0.0,0),(x_d,-float('Inf'),x)))\r\nprint(sp.integrate(fun_sum((x_d),0.0,0),(x_d,-float('Inf'),x)))\r\nprint('#### a1=1.0 a2=1') \r\nprint(sp.integrate(fun_sum((x_d),1.0,1),(x_d,-float('Inf'),x)))\r\nprint(sp.integrate(fun_sum((x_d),1.0,1),(x_d,-float('Inf'),x)))\r\n\r\nprint('### expected output') \r\nprint('#### a1=0.0 a2=0.0') \r\nprint(sp.integrate(fun_sum((x_d),0.0,0.0),(x_d,-float('Inf'),x)))\r\nprint(sp.integrate(fun_sum((x_d),0.0,0.0),(x_d,-float('Inf'),x)))\r\nprint('#### a1=0 a2=0') \r\nprint(sp.integrate(fun_sum((x_d),0,0),(x_d,-float('Inf'),x)))\r\nprint(sp.integrate(fun_sum((x_d),0,0),(x_d,-float('Inf'),x)))\r\n````\r\n# output\r\n````\r\n### wrong output\r\n#### a1=0 a2=0.0\r\n2*x - Min(0.0, x) + oo\r\n2*x - Min(0.0, x) + oo\r\n#### a1=0.0 a2=0\r\n2*x - Min(0.0, x) + oo\r\n2*x - Min(0.0, x) + oo\r\n#### a1=1.0 a2=1\r\n2*x - Min(1.0, x) + oo\r\n2*x - Min(1.0, x) + oo\r\n### expected output\r\n#### a1=0.0 a2=0.0\r\n2*x - 2*Min(0.0, x)\r\n2*x - 2*Min(0.0, x)\r\n#### a1=0 a2=0\r\n2*x - 2*Min(0, x)\r\n2*x - 2*Min(0, x)\r\n````\r\n\r\nEdit: simplified example", "number": 20781, "title": "Sympy.integrate wrongfully returns oo when applied on Piecewise with both floats and integers of same value" } ]
067e0eaf90158c6b0ef6be14c32d9d9ccd97985b
{ "head_commit": "2d9a2923c06bd9a6d1952cbf8fcd4c52390778c5", "head_commit_message": "added tests for #20781", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex ed9268cc06c1..dd2a1322503d 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -612,6 +612,8 @@ Gaurav Jain <[email protected]> Gaurav Jain <[email protected]\n Gautam Menghani <[email protected]> gautammenghani <[email protected]>\n Gautam Menghani <[email protected]> gum3ng <[email protected]>\n Gautam Menghani <[email protected]> gum3ng <[email protected]>\n+Geetika Vadali <[email protected]> Geetika V <[email protected]>\n+Geetika Vadali <[email protected]> Geetika Vadali <[email protected]>\n Geoffry Song <[email protected]>\n George Korepanov <[email protected]>\n George Waksman <[email protected]>\ndiff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py\nindex f2c01ae9d104..04b5173a67b5 100644\n--- a/sympy/integrals/tests/test_integrals.py\n+++ b/sympy/integrals/tests/test_integrals.py\n@@ -2098,6 +2098,16 @@ def test_issue_20782():\n assert integrate(f, (x, -oo, 1)) == 1\n assert integrate(-f, (x, -oo, 1)) == -1\n \n+def test_issue_20781():\n+ x_d = Symbol('x_d')\n+ fun_sum = lambda x, a1, a2: Piecewise((0, x<a1),(1, x>=a1)) + Piecewise((0, x<a2),(1, x>=a2))\n+\n+ assert integrate(fun_sum((x_d), 0, 0.0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0, 0.0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0.0, 0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0.0, 0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 1.0, 1), (x_d, -float('Inf'), x)) == 2*x - 2*Min(1, x)\n+ assert integrate(fun_sum((x_d), 1.0, 1), (x_d, -float('Inf'), x)) == 2*x - 2*Min(1, x)\n \n @slow\n def test_issue_19427():\n" }
[ { "diff_hunk": "@@ -2098,6 +2098,16 @@ def test_issue_20782():\n assert integrate(f, (x, -oo, 1)) == 1\n assert integrate(-f, (x, -oo, 1)) == -1\n \n+def test_issue_20781():\n+ x_d = Symbol('x_d')\n+ fun_sum = lambda x, a1, a2: Piecewise((0, x<a1),(1, x>=a1)) + Piecewise((0, x<a2),(1, x>=a2))\n+\n+ assert integrate(fun_sum((x_d), 0, 0.0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0, 0.0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0.0, 0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 0.0, 0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x)\n+ assert integrate(fun_sum((x_d), 1.0, 1), (x_d, -float('Inf'), x)) == 2*x - 2*Min(1, x)\n+ assert integrate(fun_sum((x_d), 1.0, 1), (x_d, -float('Inf'), x)) == 2*x - 2*Min(1, x)", "line": null, "original_line": 2110, "original_start_line": 2105, "path": "sympy/integrals/tests/test_integrals.py", "start_line": null, "text": "@user1:\nI see the tests are correct, but what is the motivation behind the repetition in consecutive tests?\n\n@author:\ni seem to have copied from the issue. let me remove the repeated lines" } ]
0453f2ab1ac770ae07faae8c243c8723305a4e41
diff --git a/.mailmap b/.mailmap index ed9268cc06c1..dd2a1322503d 100644 --- a/.mailmap +++ b/.mailmap @@ -612,6 +612,8 @@ Gaurav Jain <[email protected]> Gaurav Jain <[email protected] Gautam Menghani <[email protected]> gautammenghani <[email protected]> Gautam Menghani <[email protected]> gum3ng <[email protected]> Gautam Menghani <[email protected]> gum3ng <[email protected]> +Geetika Vadali <[email protected]> Geetika V <[email protected]> +Geetika Vadali <[email protected]> Geetika Vadali <[email protected]> Geoffry Song <[email protected]> George Korepanov <[email protected]> George Waksman <[email protected]> diff --git a/sympy/integrals/tests/test_integrals.py b/sympy/integrals/tests/test_integrals.py index f2c01ae9d104..46879fc6d6be 100644 --- a/sympy/integrals/tests/test_integrals.py +++ b/sympy/integrals/tests/test_integrals.py @@ -2098,6 +2098,13 @@ def test_issue_20782(): assert integrate(f, (x, -oo, 1)) == 1 assert integrate(-f, (x, -oo, 1)) == -1 +def test_issue_20781(): + x_d = Symbol('x_d') + fun_sum = lambda x, a1, a2: Piecewise((0, x<a1),(1, x>=a1)) + Piecewise((0, x<a2),(1, x>=a2)) + + assert integrate(fun_sum((x_d), 0, 0.0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x) + assert integrate(fun_sum((x_d), 0.0, 0), (x_d, -float('Inf'), x)) == 2*x - 2*Min(0, x) + assert integrate(fun_sum((x_d), 1.0, 1), (x_d, -float('Inf'), x)) == 2*x - 2*Min(1, x) @slow def test_issue_19427():
{ "difficulty": "medium", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-25283@9e6500a
sympy/sympy
Python
25283
Add CRootsOf incrementally for eigenvalues
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #25282 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> - matrices - Fixed some corner cases when `Matrix.eigenvals` gives wrong result when `multiplicity=False`. <!-- END RELEASE NOTES -->
2023-06-23T03:45:17Z
incorrect multiplicities from `eigenvals` To reproduce: ``` import numpy as np import sympy as sp print(sp.__version__) dd = sd = [0] * 11 + [1] ds = [2,0,1,0,0,0,1,0,1,0,1,0] ss = ds.copy() ss[8] = 2 def rotate(x, i): return x[i:] + x[:i] mat = [] for i in range(12): mat.append(rotate(ss,i) + rotate(sd,i)) for i in range(12): mat.append(rotate(ds,i) + rotate(dd,i)) print(np.array(mat)) mat = sp.Matrix(mat) eigenvalues = mat.eigenvals() print('eigenvalues from eigenvals:') for i in eigenvalues: print(eigenvalues[i], 'x', i) print('matrix dimensions:', mat.shape) print('sum of multiplicities:', sum([eigenvalues[i] for i in eigenvalues])) eigenvectors = mat.eigenvects() print('eigenvalues from eigenvects (multiplicities appear to be correct):') for val, mult, _ in eigenvectors: print(mult, 'x', val) ``` Output: ``` 1.12 [[2 0 1 0 0 0 1 0 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1] [0 1 0 0 0 1 0 2 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0] [1 0 0 0 1 0 2 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0] [0 0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0] [1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 2 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0] [2 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0] [0 1 0 2 0 1 0 0 0 1 0 2 0 0 1 0 0 0 0 0 0 0 0 0] [1 0 2 0 1 0 0 0 1 0 2 0 0 1 0 0 0 0 0 0 0 0 0 0] [0 2 0 1 0 0 0 1 0 2 0 1 1 0 0 0 0 0 0 0 0 0 0 0] [2 0 1 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1] [0 1 0 0 0 1 0 1 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0] [1 0 0 0 1 0 1 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0] [0 0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0] [1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 1 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0] [1 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0] [0 1 0 2 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0] [1 0 2 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0] [0 2 0 1 0 0 0 1 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0]] eigenvalues from eigenvals: 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 0) 1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 0) 1 x -1 1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 1) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 1) 1 x CRootOf(lambda**2 - 6*lambda - 1, 0) 1 x CRootOf(lambda**2 - 8*lambda + 1, 0) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 2) 1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 2) 1 x 1 1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 3) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 3) 1 x CRootOf(lambda**2 - 6*lambda - 1, 1) 1 x CRootOf(lambda**2 - 8*lambda + 1, 1) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 4) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 5) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 6) 1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 7) matrix dimensions: (24, 24) sum of multiplicities: 18 eigenvalues from eigenvects (multiplicities appear to be correct): 2 x -1 2 x 1 2 x -sqrt(5/2 - sqrt(21)/2) 2 x sqrt(5/2 - sqrt(21)/2) 1 x 3 - sqrt(10) 1 x 3 + sqrt(10) 1 x 4 - sqrt(15) 1 x sqrt(15) + 4 2 x -sqrt(sqrt(21)/2 + 5/2) 2 x sqrt(sqrt(21)/2 + 5/2) 1 x -cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) - I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) 1 x -cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) + I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) 1 x 
cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) - I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) 1 x cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) + I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) 1 x -sqrt(sqrt(3) + 5/2 + sqrt(33 + 20*sqrt(3))/2) 1 x sqrt(sqrt(3) + 5/2 + sqrt(33 + 20*sqrt(3))/2) 1 x -sqrt(-sqrt(33 + 20*sqrt(3))/2 + sqrt(3) + 5/2) 1 x sqrt(-sqrt(33 + 20*sqrt(3))/2 + sqrt(3) + 5/2) ``` The multiplicities from `eigenvals` function only sum up to 18, while the matrix is of dimension 24. The output from `eigenvects` appears to be correct (and it's impressive that it can solve the 8th degree equation symbolically).
I think that some roots are ignored by `roots`. I'm trying to investigate whether the matrix code is responsible for something, but `sum(roots(mat.charpoly()).values())` already behaves suspiciously. With `multiple=True`, the result seems correct. Eigenvects are computable but quite slow, though.
[ { "body": "To reproduce:\r\n```\r\nimport numpy as np\r\nimport sympy as sp\r\n\r\nprint(sp.__version__)\r\n\r\ndd = sd = [0] * 11 + [1]\r\nds = [2,0,1,0,0,0,1,0,1,0,1,0]\r\nss = ds.copy()\r\nss[8] = 2\r\n\r\ndef rotate(x, i):\r\n return x[i:] + x[:i]\r\n\r\nmat = []\r\nfor i in range(12):\r\n mat.append(rotate(ss,i) + rotate(sd,i))\r\nfor i in range(12):\r\n mat.append(rotate(ds,i) + rotate(dd,i))\r\n\r\nprint(np.array(mat))\r\n\r\nmat = sp.Matrix(mat)\r\neigenvalues = mat.eigenvals()\r\nprint('eigenvalues from eigenvals:')\r\nfor i in eigenvalues:\r\n print(eigenvalues[i], 'x', i)\r\nprint('matrix dimensions:', mat.shape)\r\nprint('sum of multiplicities:', sum([eigenvalues[i] for i in eigenvalues]))\r\n\r\neigenvectors = mat.eigenvects()\r\nprint('eigenvalues from eigenvects (multiplicities appear to be correct):')\r\nfor val, mult, _ in eigenvectors:\r\n print(mult, 'x', val)\r\n```\r\n\r\nOutput:\r\n```\r\n1.12\r\n[[2 0 1 0 0 0 1 0 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1]\r\n [0 1 0 0 0 1 0 2 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0]\r\n [1 0 0 0 1 0 2 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0]\r\n [0 0 0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0]\r\n [0 0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0]\r\n [0 1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0]\r\n [1 0 2 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]\r\n [0 2 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0]\r\n [2 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0]\r\n [0 1 0 2 0 1 0 0 0 1 0 2 0 0 1 0 0 0 0 0 0 0 0 0]\r\n [1 0 2 0 1 0 0 0 1 0 2 0 0 1 0 0 0 0 0 0 0 0 0 0]\r\n [0 2 0 1 0 0 0 1 0 2 0 1 1 0 0 0 0 0 0 0 0 0 0 0]\r\n [2 0 1 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1]\r\n [0 1 0 0 0 1 0 1 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0]\r\n [1 0 0 0 1 0 1 0 1 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0]\r\n [0 0 0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0]\r\n [0 0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0]\r\n [0 1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0]\r\n [1 0 1 0 1 0 2 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]\r\n [0 1 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0]\r\n [1 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0]\r\n [0 1 0 2 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0]\r\n [1 0 2 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0]\r\n [0 2 0 1 0 0 0 1 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0]]\r\neigenvalues from eigenvals:\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 0)\r\n1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 0)\r\n1 x -1\r\n1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 1)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 1)\r\n1 x CRootOf(lambda**2 - 6*lambda - 1, 0)\r\n1 x CRootOf(lambda**2 - 8*lambda + 1, 0)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 2)\r\n1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 2)\r\n1 x 1\r\n1 x CRootOf(lambda**4 - 5*lambda**2 + 1, 3)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 3)\r\n1 x CRootOf(lambda**2 - 6*lambda - 1, 1)\r\n1 x CRootOf(lambda**2 - 8*lambda + 1, 1)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 4)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 5)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 6)\r\n1 x CRootOf(lambda**8 - 10*lambda**6 + 15*lambda**4 - 10*lambda**2 + 1, 7)\r\nmatrix dimensions: (24, 24)\r\nsum of multiplicities: 18\r\neigenvalues from eigenvects (multiplicities appear to be correct):\r\n2 x -1\r\n2 x 1\r\n2 x -sqrt(5/2 - sqrt(21)/2)\r\n2 x sqrt(5/2 - sqrt(21)/2)\r\n1 x 3 - sqrt(10)\r\n1 x 3 + sqrt(10)\r\n1 x 4 - sqrt(15)\r\n1 x sqrt(15) + 
4\r\n2 x -sqrt(sqrt(21)/2 + 5/2)\r\n2 x sqrt(sqrt(21)/2 + 5/2)\r\n1 x -cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) - I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2)\r\n1 x -cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) + I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2)\r\n1 x cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) - I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2)\r\n1 x cos(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2) + I*sin(atan(sqrt(-33 + 20*sqrt(3))/(5 - 2*sqrt(3)))/2)\r\n1 x -sqrt(sqrt(3) + 5/2 + sqrt(33 + 20*sqrt(3))/2)\r\n1 x sqrt(sqrt(3) + 5/2 + sqrt(33 + 20*sqrt(3))/2)\r\n1 x -sqrt(-sqrt(33 + 20*sqrt(3))/2 + sqrt(3) + 5/2)\r\n1 x sqrt(-sqrt(33 + 20*sqrt(3))/2 + sqrt(3) + 5/2)\r\n```\r\nThe multiplicities from `eigenvals` function only sum up to 18, while the matrix is of dimension 24. The output from `eigenvects` appears to be correct (and it's impressive that it can solve the 8th degree equation symbolically).", "number": 25282, "title": "incorrect multiplicities from `eigenvals`" } ]
9d4517c6b1840aa2c2dfd1617aea32ccedc7041d
{ "head_commit": "9e6500a3348eeb18072fb775681b8050c00c6026", "head_commit_message": "Use counter", "patch_to_review": "diff --git a/sympy/matrices/eigen.py b/sympy/matrices/eigen.py\nindex ea39e83a940c..cc421077a3e8 100644\n--- a/sympy/matrices/eigen.py\n+++ b/sympy/matrices/eigen.py\n@@ -258,7 +258,7 @@ def _eigenvals_dict(\n f = charpoly.as_expr()\n x = charpoly.gen\n try:\n- eigs = {CRootOf(f, x, idx): 1 for idx in range(degree)}\n+ eigs = Counter(CRootOf(f, x, idx) for idx in range(degree))\n except NotImplementedError:\n if error_when_incomplete:\n raise MatrixError(eigenvals_error_message)\ndiff --git a/sympy/matrices/tests/test_eigen.py b/sympy/matrices/tests/test_eigen.py\nindex 75ec80bcdd55..e952d225f74e 100644\n--- a/sympy/matrices/tests/test_eigen.py\n+++ b/sympy/matrices/tests/test_eigen.py\n@@ -705,3 +705,21 @@ def test_issue_20752():\n b = symbols('b', nonzero=True)\n m = Matrix([[0, 0, 0], [0, b, 0], [0, 0, b]])\n assert m.is_positive_semidefinite is None\n+\n+\n+def test_issue_25282():\n+ dd = sd = [0] * 11 + [1]\n+ ds = [2, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0]\n+ ss = ds.copy()\n+ ss[8] = 2\n+\n+ def rotate(x, i):\n+ return x[i:] + x[:i]\n+\n+ mat = []\n+ for i in range(12):\n+ mat.append(rotate(ss, i) + rotate(sd, i))\n+ for i in range(12):\n+ mat.append(rotate(ds, i) + rotate(dd, i))\n+\n+ assert sum(Matrix(mat).eigenvals().values()) == 24\n" }
[ { "diff_hunk": "@@ -258,7 +258,7 @@ def _eigenvals_dict(\n f = charpoly.as_expr()\n x = charpoly.gen\n try:\n- eigs = {CRootOf(f, x, idx): 1 for idx in range(degree)}\n+ eigs = Counter(CRootOf(f, x, idx) for idx in range(degree))", "line": null, "original_line": 261, "original_start_line": null, "path": "sympy/matrices/eigen.py", "start_line": null, "text": "@user1:\nMaybe it could be `charpoly.all_roots(multiple=False)`" } ]
545f9e4c502ec939fe3aaa6e4632600ddcb3c0cc
diff --git a/sympy/matrices/eigen.py b/sympy/matrices/eigen.py index ea39e83a940c..cec37c9e61b0 100644 --- a/sympy/matrices/eigen.py +++ b/sympy/matrices/eigen.py @@ -210,11 +210,8 @@ def _eigenvals_list( eigs = roots(charpoly, multiple=True, **flags) if len(eigs) != block.rows: - degree = int(charpoly.degree()) - f = charpoly.as_expr() - x = charpoly.gen try: - eigs = [CRootOf(f, x, idx) for idx in range(degree)] + eigs = charpoly.all_roots(multiple=True) except NotImplementedError: if error_when_incomplete: raise MatrixError(eigenvals_error_message) @@ -254,11 +251,8 @@ def _eigenvals_dict( eigs = roots(charpoly, multiple=False, **flags) if sum(eigs.values()) != block.rows: - degree = int(charpoly.degree()) - f = charpoly.as_expr() - x = charpoly.gen try: - eigs = {CRootOf(f, x, idx): 1 for idx in range(degree)} + eigs = dict(charpoly.all_roots(multiple=False)) except NotImplementedError: if error_when_incomplete: raise MatrixError(eigenvals_error_message) diff --git a/sympy/matrices/tests/test_eigen.py b/sympy/matrices/tests/test_eigen.py index 75ec80bcdd55..e952d225f74e 100644 --- a/sympy/matrices/tests/test_eigen.py +++ b/sympy/matrices/tests/test_eigen.py @@ -705,3 +705,21 @@ def test_issue_20752(): b = symbols('b', nonzero=True) m = Matrix([[0, 0, 0], [0, b, 0], [0, 0, b]]) assert m.is_positive_semidefinite is None + + +def test_issue_25282(): + dd = sd = [0] * 11 + [1] + ds = [2, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0] + ss = ds.copy() + ss[8] = 2 + + def rotate(x, i): + return x[i:] + x[:i] + + mat = [] + for i in range(12): + mat.append(rotate(ss, i) + rotate(sd, i)) + for i in range(12): + mat.append(rotate(ds, i) + rotate(dd, i)) + + assert sum(Matrix(mat).eigenvals().values()) == 24
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25295@ef8b664
sympy/sympy
Python
25295
Added the logo to the docs
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Closes #25275. #### Brief description of what is fixed or changed Added the SymPy logo to the docs, under the "How-To Guides" section. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-06-27T14:12:05Z
Add logo guide to the docs We should have a page in the docs that just has our logo that people can copy, and a guide on how to use it. Right now the logo is available if you build it in the source, but it's not easily accessible from a Google search.
Hi @asmeurer, when you said 'a guide on how to use it', do you mean an rst file describing how to generate the logo?
[ { "body": "We should have a page in the docs that just has our logo that people can copy, and a guide on how to use it. Right now the logo is available if you build it in the source, but it's not easily accessible from a Google search. ", "number": 25275, "title": "Add logo guide to the docs" } ]
ad95bcc5282aecf3085675b2a2658419836c8a91
{ "head_commit": "ef8b6640f75fc2adfe654c2018f026729e8746a7", "head_commit_message": "Moved the SymPy Logo page to the How-To Guides section and added the logo image inline.", "patch_to_review": "diff --git a/doc/src/contributing/documentation-style-guide.rst b/doc/src/contributing/documentation-style-guide.rst\nindex 230a5df08420..60f06fea06d0 100644\n--- a/doc/src/contributing/documentation-style-guide.rst\n+++ b/doc/src/contributing/documentation-style-guide.rst\n@@ -177,7 +177,7 @@ Narrative documentation can be written using either Restructured Text\n (``.rst``) or Markdown (``.md``). Markdown documentation uses `MyST\n <https://myst-parser.readthedocs.io/en/latest/index.html>`_. See `this guide\n <https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html>`_ for more\n-information on how to write documents in MArkdown. Markdown is only supported\n+information on how to write documents in Markdown. Markdown is only supported\n for narrative documentation. Docstrings should continue to use RST syntax. Any\n part of this style guide that is not specific to RST syntax should still apply\n to Markdown documents.\ndiff --git a/doc/src/contributing/new-contributors-guide/build-docs.rst b/doc/src/contributing/new-contributors-guide/build-docs.rst\nindex 9a9ff3597df2..acde1219be03 100644\n--- a/doc/src/contributing/new-contributors-guide/build-docs.rst\n+++ b/doc/src/contributing/new-contributors-guide/build-docs.rst\n@@ -159,21 +159,6 @@ required dependencies locally, the documentation can be built by running the\n \n make html\n \n-\n-SymPy Logos\n-~~~~~~~~~~~\n-\n-SymPy has a collection of official logos, which can be generated from sympy.svg in your local copy of SymPy by:\n-\n-.. code-block:: none\n-\n- $ cd doc\n-\n- $ make logo # will be stored in the _build/logo subdirectory\n-\n-The license of all the logos is the same as SymPy: BSD. See the\n-`LICENSE file <https://github.com/sympy/sympy/blob/master/LICENSE>`_ for more information.\n-\n View the Docs\n ^^^^^^^^^^^^^\n \ndiff --git a/doc/src/guides/index.rst b/doc/src/guides/index.rst\nindex 7a47a3dfbd28..adf79f5ce805 100644\n--- a/doc/src/guides/index.rst\n+++ b/doc/src/guides/index.rst\n@@ -18,4 +18,5 @@ For a deeper and elaborate exploration of other SymPy topics, see the\n custom-functions.md\n physics/index.rst\n solving/index.md\n+ logo.rst\n ../citing.md\ndiff --git a/doc/src/guides/logo.rst b/doc/src/guides/logo.rst\nnew file mode 100644\nindex 000000000000..7eb197a12f3d\n--- /dev/null\n+++ b/doc/src/guides/logo.rst\n@@ -0,0 +1,32 @@\n+===========\n+SymPy Logo\n+===========\n+\n+We would like to make it easy for you to include the SymPy project identity in\n+your next academic paper, course materials, or presentation.\n+\n+.. image:: ../logo/sympy.svg\n+ :width: 600\n+ :align: center\n+ :alt: SymPy Logo\n+\n+You can find this high-resolution version of the SymPy logo\n+`here <https://github.com/sympy/sympy/blob/master/doc/src/logo/sympy.svg>`_.\n+\n+If you would like to generate SymPy's collection of official logos yourself,\n+you can do so by running:\n+\n+.. code-block:: none\n+\n+ $ cd doc\n+\n+ $ make logo # will be stored in the _build/logo subdirectory\n+\n+which will generate the logos by using the ``sympy.svg`` file in your local\n+copy of SymPy.\n+\n+There is also a ``sympy/doc/generate_logos.py`` script that allows for a wider\n+variety of options while generating the logo.\n+\n+The license of all the logos is the same as SymPy: BSD. 
See the\n+`LICENSE file <https://github.com/sympy/sympy/blob/master/LICENSE>`_ for more information.\n" }
[ { "diff_hunk": "@@ -0,0 +1,32 @@\n+===========\n+SymPy Logo\n+===========\n+\n+We would like to make it easy for you to include the SymPy project identity in\n+your next academic paper, course materials, or presentation.\n+\n+.. image:: ../logo/sympy.svg\n+ :width: 600\n+ :align: center\n+ :alt: SymPy Logo\n+\n+You can find this high-resolution version of the SymPy logo\n+`here <https://github.com/sympy/sympy/blob/master/doc/src/logo/sympy.svg>`_.", "line": null, "original_line": 14, "original_start_line": null, "path": "doc/src/guides/logo.rst", "start_line": null, "text": "@user1:\nThere's no reason to have this since the above image is basically the same thing. \n\n@author:\nRemoved." } ]
cbe12dd9a1c496b2477c08aa414d96c8b79aa21a
diff --git a/doc/src/contributing/documentation-style-guide.rst b/doc/src/contributing/documentation-style-guide.rst index 230a5df08420..60f06fea06d0 100644 --- a/doc/src/contributing/documentation-style-guide.rst +++ b/doc/src/contributing/documentation-style-guide.rst @@ -177,7 +177,7 @@ Narrative documentation can be written using either Restructured Text (``.rst``) or Markdown (``.md``). Markdown documentation uses `MyST <https://myst-parser.readthedocs.io/en/latest/index.html>`_. See `this guide <https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html>`_ for more -information on how to write documents in MArkdown. Markdown is only supported +information on how to write documents in Markdown. Markdown is only supported for narrative documentation. Docstrings should continue to use RST syntax. Any part of this style guide that is not specific to RST syntax should still apply to Markdown documents. diff --git a/doc/src/contributing/new-contributors-guide/build-docs.rst b/doc/src/contributing/new-contributors-guide/build-docs.rst index 9a9ff3597df2..acde1219be03 100644 --- a/doc/src/contributing/new-contributors-guide/build-docs.rst +++ b/doc/src/contributing/new-contributors-guide/build-docs.rst @@ -159,21 +159,6 @@ required dependencies locally, the documentation can be built by running the make html - -SymPy Logos -~~~~~~~~~~~ - -SymPy has a collection of official logos, which can be generated from sympy.svg in your local copy of SymPy by: - -.. code-block:: none - - $ cd doc - - $ make logo # will be stored in the _build/logo subdirectory - -The license of all the logos is the same as SymPy: BSD. See the -`LICENSE file <https://github.com/sympy/sympy/blob/master/LICENSE>`_ for more information. - View the Docs ^^^^^^^^^^^^^ diff --git a/doc/src/guides/index.rst b/doc/src/guides/index.rst index 7a47a3dfbd28..adf79f5ce805 100644 --- a/doc/src/guides/index.rst +++ b/doc/src/guides/index.rst @@ -18,4 +18,5 @@ For a deeper and elaborate exploration of other SymPy topics, see the custom-functions.md physics/index.rst solving/index.md + logo.rst ../citing.md diff --git a/doc/src/guides/logo.rst b/doc/src/guides/logo.rst new file mode 100644 index 000000000000..a8fd0ad85991 --- /dev/null +++ b/doc/src/guides/logo.rst @@ -0,0 +1,45 @@ +=========== +SymPy Logo +=========== + +We would like to make it easy for you to include the SymPy project identity in +your next academic paper, course materials, or presentation. + +.. image:: ../logo/sympy.svg + :width: 600 + :align: center + :alt: SymPy Logo + +The above image logo is an SVG version of the logo. We also have a PNG version of the logo: + +.. image:: ../../_build/logo/sympy-500px.png + :width: 500 + :align: center + :alt: SymPy Logo + +If you would like one without the "SymPy" text, we have that too: + +.. image:: ../../_build/logo/sympy-notext-500px.png + :width: 500 + :align: center + :alt: SymPy Logo + +Note: The text version should be preferred unless the "SymPy" name is already present separately. + +If you would like to generate SymPy's collection of official logos yourself, +you can do so by first :ref:`installing the required dependencies <build-the-documentation>`, and then running: + +.. code-block:: none + + $ cd doc + + $ make logo # will be stored in the _build/logo subdirectory + +which will generate the logos by using the ``sympy.svg`` file in your local +copy of SymPy. + +There is also a ``sympy/doc/generate_logos.py`` script that allows for a wider +variety of options while generating the logo. 
+ +The license of all the logos is the same as SymPy: BSD. See the +`LICENSE file <https://github.com/sympy/sympy/blob/master/LICENSE>`_ for more information.
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Documentation Updates" }
sympy__sympy-25256@40ea678
sympy/sympy
Python
25,256
Removed lazy calculation and storage of norm
#### References to other Issues or PRs Fixes #25254 #### Release Notes <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-06-19T08:38:10Z
Quaternions in version 12 Dear friends, I have detected a problem when multiplying Quaternions using the * to do the product between two. It also fails when using q.mul(p). In my case, one of the quaternions was obtained by the "from Axis angle". This code gives me problems: `p= Quaternion(1,0,1,0); q = Quaternion.from_axis_angle((1,1,1), 3*np.pi/4); qi =q.inverse(); p1= q*p*qi ` The error is not there for any previous versions 11.x. Many thanks for considering my request.
CC @evbernardes ```python In [3]: p= Quaternion(1,0,1,0); q = Quaternion.from_axis_angle((1,1,1), 3*np.pi/4); qi =q.inverse() In [4]: q*p --------------------------------------------------------------------------- ValueError: Incompatible value for norm. ``` Hello @singladur. I think the bug itself comes from `3*np.pi/4` line, since the used value or `pi` is the numerical value from `numpy`. Changing it to `3*sympy.pi/4` would solve it. This shouldn't be a big deal though though, so I'd say it's more of a **design fault from my part**. To remove this problem from the `Sympy` source, this: ```python def norm(self): if self._norm is None: # check if norm is pre-defined q = self self._norm = sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2)) return self._norm ``` Can be changed to this: ```python def norm(self): if self._norm is None: # check if norm is pre-defined q = self return sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2)) return self._norm ``` Since removing this lazy calculation and storage of the norm value would avoid forcing the `norm` attritube in every `Quaternion`, except when explicitly asked. @oscarbenjamin If this is okay for you, I made a PR with this. Except if you plan on removing the norm system entirely, like we were discussing some months ago.
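An illustrative reproduction sketch of the report above, using `math.pi` (as later suggested in the review comments) rather than NumPy; before this change the final multiplication raised `ValueError: Incompatible value for norm.`, and after it the product is an ordinary `Quaternion`. This is a sketch, not a verbatim excerpt from the record.

```python
import math
from sympy.algebras.quaternion import Quaternion

p = Quaternion(1, 0, 1, 0)
q = Quaternion.from_axis_angle((1, 1, 1), 3 * math.pi / 4)  # float angle, so the norm is inexact
qi = q.inverse()        # before the fix, this call cached q's inexact norm
result = q * p * qi     # pre-fix: ValueError("Incompatible value for norm."); post-fix: fine
assert isinstance(result, Quaternion)
```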
[ { "body": "Dear friends,\r\nI have detected a problem when multiplying Quaternions using the * to do the product between two. It also fails when using q.mul(p). In my case, one of the quaternions was obtained by the \"from Axis angle\". This code gives me problems:\r\n\r\n`p= Quaternion(1,0,1,0);\r\nq = Quaternion.from_axis_angle((1,1,1), 3*np.pi/4);\r\nqi =q.inverse();\r\np1= q*p*qi `\r\n\r\nThe error is not there for any previous versions 11.x.\r\n\r\nMany thanks for considering my request.", "number": 25254, "title": "Quaternions in version 12" } ]
a0673de13b81e06b0fcdba9ea938cbbd098f9ea7
{ "head_commit": "40ea678b641201ed091142267f69479932b1bdd8", "head_commit_message": "fixed test", "patch_to_review": "diff --git a/sympy/algebras/quaternion.py b/sympy/algebras/quaternion.py\nindex d0a8915ee9c0..42dd989ec648 100644\n--- a/sympy/algebras/quaternion.py\n+++ b/sympy/algebras/quaternion.py\n@@ -884,7 +884,7 @@ def norm(self):\n q = self\n # trigsimp is used to simplify sin(x)^2 + cos(x)^2 (these terms\n # arise when from_axis_angle is used).\n- self._norm = sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2))\n+ return sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2))\n \n return self._norm\n \ndiff --git a/sympy/algebras/tests/test_quaternion.py b/sympy/algebras/tests/test_quaternion.py\nindex edf65ec18e08..4896a0e73e29 100644\n--- a/sympy/algebras/tests/test_quaternion.py\n+++ b/sympy/algebras/tests/test_quaternion.py\n@@ -13,6 +13,7 @@\n from sympy.simplify.trigsimp import trigsimp\n from sympy.algebras.quaternion import Quaternion\n from sympy.testing.pytest import raises\n+import numpy as np\n from itertools import permutations, product\n \n w, x, y, z = symbols('w:z')\n@@ -46,6 +47,18 @@ def test_quaternion_construction_norm():\n assert (q1 * q3).norm() == q1.norm()\n \n \n+def test_issue_25254():\n+ # calculating the inverse cached the norm which caused problems\n+ # when multiplying\n+ p = Quaternion(1, 0, 1, 0)\n+ q = Quaternion.from_axis_angle((1,1,1), 3 * np.pi/4)\n+ qi = q.inverse() # this operation cached the norm\n+ try:\n+ test = q * p * qi\n+ except ValueError as exc:\n+ assert False, f\"'10 / 5' raised an exception {exc}\"\n+\n+\n def test_to_and_from_Matrix():\n q = Quaternion(w, x, y, z)\n q_full = Quaternion.from_Matrix(q.to_Matrix())\n" }
[ { "diff_hunk": "@@ -46,6 +47,18 @@ def test_quaternion_construction_norm():\n assert (q1 * q3).norm() == q1.norm()\n \n \n+def test_issue_25254():\n+ # calculating the inverse cached the norm which caused problems\n+ # when multiplying\n+ p = Quaternion(1, 0, 1, 0)\n+ q = Quaternion.from_axis_angle((1,1,1), 3 * np.pi/4)", "line": null, "original_line": 54, "original_start_line": null, "path": "sympy/algebras/tests/test_quaternion.py", "start_line": null, "text": "@user1:\nWe can test with `math.pi` constant to remove dependency with numpy." } ]
5cf7d0e73681c83fb8282f720765108aab011d26
diff --git a/sympy/algebras/quaternion.py b/sympy/algebras/quaternion.py index d0a8915ee9c0..42dd989ec648 100644 --- a/sympy/algebras/quaternion.py +++ b/sympy/algebras/quaternion.py @@ -884,7 +884,7 @@ def norm(self): q = self # trigsimp is used to simplify sin(x)^2 + cos(x)^2 (these terms # arise when from_axis_angle is used). - self._norm = sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2)) + return sqrt(trigsimp(q.a**2 + q.b**2 + q.c**2 + q.d**2)) return self._norm diff --git a/sympy/algebras/tests/test_quaternion.py b/sympy/algebras/tests/test_quaternion.py index edf65ec18e08..f05230424761 100644 --- a/sympy/algebras/tests/test_quaternion.py +++ b/sympy/algebras/tests/test_quaternion.py @@ -13,6 +13,7 @@ from sympy.simplify.trigsimp import trigsimp from sympy.algebras.quaternion import Quaternion from sympy.testing.pytest import raises +import math from itertools import permutations, product w, x, y, z = symbols('w:z') @@ -46,6 +47,16 @@ def test_quaternion_construction_norm(): assert (q1 * q3).norm() == q1.norm() +def test_issue_25254(): + # calculating the inverse cached the norm which caused problems + # when multiplying + p = Quaternion(1, 0, 0, 0) + q = Quaternion.from_axis_angle((1, 1, 1), 3 * math.pi/4) + qi = q.inverse() # this operation cached the norm + test = q * p * qi + assert ((test - p).norm() < 1E-10) + + def test_to_and_from_Matrix(): q = Quaternion(w, x, y, z) q_full = Quaternion.from_Matrix(q.to_Matrix())
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-25056@2e34024
sympy/sympy
Python
25,056
Fix substituting TensExpr in Mul
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #25051 #### Brief description of what is fixed or changed Earlier, substituting a TensExpr into a Mul would output a Mul. Now, the Mul is converted to a TensMul. I've also added some tests for Mul->TensMul and Add->TensAdd. #### Other comments The conversion already seemed to work for Add->TensAdd, so I've just added tests for it. I have put the tests in tensor/tests/test_tensor.py, but is there a better place for them? #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * tensor * Substituting a Tensor into a Mul now produces a TensMul as expected. <!-- END RELEASE NOTES -->
2023-04-15T05:23:48Z
Substituting Tensor into Mul returns Mul rather than TensMul When I substitute a tensor into a Mul (using `subs` or `replace`), I would expect the output to be a `TensMul`, but it remains a `Mul`. (in isympy prompt) ``` In [1]: from sympy import symbols In [2]: from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead In [3]: R3 = TensorIndexType('R3', dim=3) In [4]: i = tensor_indices("i", R3) In [5]: K = TensorHead("K", [R3]) In [7]: expr = (x*2).replace(x, K(i)) In [9]: type(expr) Out[9]: sympy.core.mul.Mul ``` This is a problem in more complicated expressions, since the user might not even notice that one of the terms has the wrong type, and then they might wonder why various methods (e.g. expand) don't work as expected. One possible way to fix this (but only for `subs`) is to check in `Mul._eval_subs` if the argument `new` is a `TensExpr`, and cast the output accordingly. However, this would not fix the issue for `xreplace` and `replace`. Any ideas on how this can be fixed? Is it acceptable to add a check in `AssocOp.__new__`? Sympy version: 1.11.1 Python version: 3.10.10 OS: Arch Linux
This should be handled with postprocessors, similar to https://github.com/sympy/sympy/blob/master/sympy/matrices/expressions/matexpr.py#L518. More generally, it might be a good idea to make tensor expressions and matrix expressions share some code, but that could be a lot of work. I've made a PR: https://github.com/sympy/sympy/pull/25056
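A minimal illustration of the behaviour this issue is about, mirroring the tests added in the linked PR; it assumes the constructor-postprocessor registration described above is in place.

```python
from sympy import symbols
from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead, TensMul, TensAdd

x = symbols("x")
R3 = TensorIndexType("R3", dim=3)
i = tensor_indices("i", R3)
K = TensorHead("K", [R3])

# With the postprocessor registered, substitution keeps the tensor expression types:
assert isinstance((x * 2).replace(x, K(i)), TensMul)          # was a plain Mul before the fix
assert isinstance((x + 2).subs({x: K(i) * K(-i)}), TensAdd)   # Add -> TensAdd already worked
```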
[ { "body": "When I substitute a tensor into a Mul (using `subs` or `replace`), I would expect the output to be a `TensMul`, but it remains a `Mul`.\r\n\r\n(in isympy prompt)\r\n```\r\nIn [1]: from sympy import symbols\r\nIn [2]: from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead\r\nIn [3]: R3 = TensorIndexType('R3', dim=3)\r\nIn [4]: i = tensor_indices(\"i\", R3)\r\nIn [5]: K = TensorHead(\"K\", [R3])\r\nIn [7]: expr = (x*2).replace(x, K(i))\r\nIn [9]: type(expr)\r\nOut[9]: sympy.core.mul.Mul\r\n```\r\n\r\nThis is a problem in more complicated expressions, since the user might not even notice that one of the terms has the wrong type, and then they might wonder why various methods (e.g. expand) don't work as expected.\r\n\r\nOne possible way to fix this (but only for `subs`) is to check in `Mul._eval_subs` if the argument `new` is a `TensExpr`, and cast the output accordingly.\r\n\r\nHowever, this would not fix the issue for `xreplace` and `replace`. Any ideas on how this can be fixed? Is it acceptable to add a check in `AssocOp.__new__`?\r\n\r\nSympy version: 1.11.1\r\nPython version: 3.10.10\r\nOS: Arch Linux", "number": 25051, "title": "Substituting Tensor into Mul returns Mul rather than TensMul" } ]
bce780e23c0408e76c83d1c2866b469baa1e1e66
{ "head_commit": "2e340240950626c3f652386c98d2370cbf3916f7", "head_commit_message": "tensor: get_postprocessor: remove unnecessary use of list in any()\n\nflake8 does not like it.", "patch_to_review": "diff --git a/sympy/tensor/tensor.py b/sympy/tensor/tensor.py\nindex 3d0c5da67b89..07280946d1fb 100644\n--- a/sympy/tensor/tensor.py\n+++ b/sympy/tensor/tensor.py\n@@ -4861,3 +4861,23 @@ def _expand(expr, **kwargs):\n return expr._expand(**kwargs)\n else:\n return expr.expand(**kwargs)\n+\n+\n+def get_postprocessor(cls):\n+ def _postprocessor(expr):\n+ tens_class = {Mul: TensMul, Add: TensAdd}[cls]\n+ if any(isinstance(a, TensExpr) for a in expr.args):\n+ return tens_class(*expr.args)\n+ else:\n+ return expr\n+\n+ return _postprocessor\n+\n+\"\"\"\n+The following makes sure that if a user tries to put a tensor in a Mul, it\n+automatically gets converted to a TensMul (see github issue #25051). The\n+conversion already seems to work for TensAdd, so we don't do anything for that.\n+\"\"\"\n+Basic._constructor_postprocessor_mapping[TensExpr] = {\n+ \"Mul\": [get_postprocessor(Mul)],\n+}\ndiff --git a/sympy/tensor/tests/test_tensor.py b/sympy/tensor/tests/test_tensor.py\nindex 043dd9641416..a13675860bc1 100644\n--- a/sympy/tensor/tests/test_tensor.py\n+++ b/sympy/tensor/tests/test_tensor.py\n@@ -2040,3 +2040,22 @@ def test_TensorType():\n def test_dummy_fmt():\n with warns_deprecated_sympy():\n TensorIndexType('Lorentz', dummy_fmt='L')\n+\n+def test_postprocessor():\n+ \"\"\"\n+ Test if substituting a Tensor into a Mul or Add automatically converts it\n+ to TensMul or TensAdd respectively. See github issue #25051\n+ \"\"\"\n+ R3 = TensorIndexType('R3', dim=3)\n+ i = tensor_indices(\"i\", R3)\n+ K = TensorHead(\"K\", [R3])\n+ x,y,z = symbols(\"x y z\")\n+\n+ assert isinstance((x*2).xreplace({x: K(i)}), TensMul)\n+ assert isinstance((x+2).xreplace({x: K(i)*K(-i)}), TensAdd)\n+\n+ assert isinstance((x*2).subs({x: K(i)}), TensMul)\n+ assert isinstance((x+2).subs({x: K(i)*K(-i)}), TensAdd)\n+\n+ assert isinstance((x*2).replace(x, K(i)), TensMul)\n+ assert isinstance((x+2).replace(x, K(i)*K(-i)), TensAdd)\n" }
[ { "diff_hunk": "@@ -4861,3 +4861,23 @@ def _expand(expr, **kwargs):\n return expr._expand(**kwargs)\n else:\n return expr.expand(**kwargs)\n+\n+\n+def get_postprocessor(cls):\n+ def _postprocessor(expr):\n+ tens_class = {Mul: TensMul, Add: TensAdd}[cls]\n+ if any(isinstance(a, TensExpr) for a in expr.args):\n+ return tens_class(*expr.args)\n+ else:\n+ return expr\n+\n+ return _postprocessor\n+\n+\"\"\"\n+The following makes sure that if a user tries to put a tensor in a Mul, it\n+automatically gets converted to a TensMul (see github issue #25051). The\n+conversion already seems to work for TensAdd, so we don't do anything for that.\n+\"\"\"", "line": null, "original_line": 4880, "original_start_line": null, "path": "sympy/tensor/tensor.py", "start_line": null, "text": "@user1:\nUse # for comments. You also don't need to mention all these details in a comment. Just the first sentence here is enough, or even nothing at all, since people can just look up what the constructor postprocessor is in basic.py if they don't know what this code does. \n\n@author:\nOkay, I've removed that comment." } ]
e9e59ff5f8ae169b9f115ab04fd19d48f70b26c5
diff --git a/sympy/tensor/tensor.py b/sympy/tensor/tensor.py index 3d0c5da67b89..9601ae75d8fb 100644 --- a/sympy/tensor/tensor.py +++ b/sympy/tensor/tensor.py @@ -4861,3 +4861,18 @@ def _expand(expr, **kwargs): return expr._expand(**kwargs) else: return expr.expand(**kwargs) + + +def get_postprocessor(cls): + def _postprocessor(expr): + tens_class = {Mul: TensMul, Add: TensAdd}[cls] + if any(isinstance(a, TensExpr) for a in expr.args): + return tens_class(*expr.args) + else: + return expr + + return _postprocessor + +Basic._constructor_postprocessor_mapping[TensExpr] = { + "Mul": [get_postprocessor(Mul)], +} diff --git a/sympy/tensor/tests/test_tensor.py b/sympy/tensor/tests/test_tensor.py index 043dd9641416..a13675860bc1 100644 --- a/sympy/tensor/tests/test_tensor.py +++ b/sympy/tensor/tests/test_tensor.py @@ -2040,3 +2040,22 @@ def test_TensorType(): def test_dummy_fmt(): with warns_deprecated_sympy(): TensorIndexType('Lorentz', dummy_fmt='L') + +def test_postprocessor(): + """ + Test if substituting a Tensor into a Mul or Add automatically converts it + to TensMul or TensAdd respectively. See github issue #25051 + """ + R3 = TensorIndexType('R3', dim=3) + i = tensor_indices("i", R3) + K = TensorHead("K", [R3]) + x,y,z = symbols("x y z") + + assert isinstance((x*2).xreplace({x: K(i)}), TensMul) + assert isinstance((x+2).xreplace({x: K(i)*K(-i)}), TensAdd) + + assert isinstance((x*2).subs({x: K(i)}), TensMul) + assert isinstance((x+2).subs({x: K(i)*K(-i)}), TensAdd) + + assert isinstance((x*2).replace(x, K(i)), TensMul) + assert isinstance((x+2).replace(x, K(i)*K(-i)), TensAdd)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-24974@8b1e363
sympy/sympy
Python
24,974
physics: Unit dyads access from ReferenceFrames
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> fixes #24965 #### Brief description of what is fixed or changed Unit dyads and unit dyadic can be accessed through attributes of a ReferenceFrame object. It is an addition related to issue #24965, allowing easy access to unit dyads without the need to use the outer() function. This commit adds 10 functions with the property decorator, along with a few examples in the constructor's documentation. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.vector * Added access to unit dyads and unit dyadic directly from ReferenceFrame objects. <!-- END RELEASE NOTES -->
2023-03-25T08:50:06Z
ReferenceFrame should have attributes to access unit dyads and its unit dyadic ReferenceFrame has `.x,.y,.z` to access the unit vectors. It would be nice if you could access unit dyads also with `.xx,.xy,.xz,...`. Then you can construct dyadics without having to do `outer(N.x, N.x)`. This is a simple fix and addition. It would need to be documented and shown in some examples.
We could add an attribute for the unit dyadic. Maybe `N.dyadic` or `N.unit`. Not sure what the best name would be. Hello, I am willing to work on this. I wanted to ask if the intended behavior should be something like: ``` python >>> N = ReferenceFrame('N') >>> N.xx (N.x|N.x) >>> N.yz (N.y|N.z) ``` I have some basic linear algebra knowledge, however I haven't worked with dyadics before, and probably I have misunderstood this. Yes that's the idea. No linear algebra knowledge needed. `.xx` should simply execute `outer(self.x, self.x)`. Thank you, I will work on that! I made a function inside ReferenceFrame class that returns unit_dyadics of a given reference frame. The code is like this def unit_dyadics(N): """ Returns a dictionary of unit dyadics in the given reference frame N. parameters : N : ReferenceFrame The reference frame in which to create the unit dyadics. returns : dict A dictionary of unit dyadics with keys of the form "xx", "xy", "xz", etc. """ unit_dyadics = {} for i, b1 in enumerate(N): for j, b2 in enumerate(N): unit_dyadics[b1.name + b2.name] = outer(b1, b2) if i == j else 0 return unit_dyadics This creates a matrix of dyadics I will also try to write code for dyads in sometime Should I start a pull request now ? > I made a function inside ReferenceFrame class that returns unit_dyadics of a given reference frame. No, that is not what we are looking for. A unit dyadic is simply an identity matrix. More info on unit dyads and the unit dyadic can be found here: https://moorepants.github.io/learn-multibody-dynamics/mass.html
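A short usage sketch of the attributes added here (names taken from the patch below); illustrative only.

```python
from sympy.physics.vector import ReferenceFrame, outer

N = ReferenceFrame("N")
assert N.xy == outer(N.x, N.y)      # a unit dyad, without calling outer() explicitly
assert N.u == N.xx + N.yy + N.zz    # the unit dyadic of the frame
```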
[ { "body": "ReferenceFrame has `.x,.y,.z` to access the unit vectors. It would be nice if you could access unit dyads also with `.xx,.xy,.xz,...`. Then you can construct dyadics without having to do `outer(N.x, N.x)`. This is a simple fix and addition. It would need to be documented and shown in some examples.", "number": 24965, "title": "ReferenceFrame should have attributes to access unit dyads and its unit dyadic" } ]
403b9b0dd24f9bd64e5b599c2ad1e87dad06da44
{ "head_commit": "8b1e36354c4ace4b01d12cb58fde49279e5460d8", "head_commit_message": "Fixed unit dyad example documentation mistake", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex b3945e4c9ad8..856c6064202e 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -802,6 +802,7 @@ Kiyohito Yamazaki <[email protected]>\n Klaus Rettinghaus <[email protected]>\n Konrad Meyer <[email protected]>\n Konstantin Togoi <[email protected]> <[email protected]>\n+Konstantinos Riganas <[email protected]> kostas-rigan <[email protected]>\n Kristian Brünn <[email protected]> Kristianmitk <[email protected]>\n Krit Karan <[email protected]> <[email protected]>\n Kshitij <[email protected]> Kshitij Parwani <[email protected]>\ndiff --git a/sympy/physics/vector/frame.py b/sympy/physics/vector/frame.py\nindex 23f9c0a841de..bd13f7f37b18 100644\n--- a/sympy/physics/vector/frame.py\n+++ b/sympy/physics/vector/frame.py\n@@ -145,6 +145,30 @@ def __init__(self, name, indices=None, latexs=None, variables=None):\n >>> type(A) == type(D)\n True\n \n+ Unit dyads for the ReferenceFrame can be accessed through the attributes ``xx``, ``xy``, etc. For example:\n+\n+ >>> from sympy.physics.vector import ReferenceFrame\n+ >>> N = ReferenceFrame('N')\n+ >>> N.yz\n+ (N.y|N.z)\n+ >>> N.zx\n+ (N.z|N.x)\n+ >>> P = ReferenceFrame('P', indices=['1', '2', '3'])\n+ >>> P.xx\n+ (P['1']|P['1'])\n+ >>> P.zy\n+ (P['3']|P['2'])\n+\n+ Unit dyadic is also accessible via the ``u`` attribute:\n+\n+ >>> from sympy.physics.vector import ReferenceFrame\n+ >>> N = ReferenceFrame('N')\n+ >>> N.unit\n+ (N.x|N.x) + (N.y|N.y) + (N.z|N.z)\n+ >>> P = ReferenceFrame('P', indices=['1', '2', '3'])\n+ >>> P.u\n+ (P['1']|P['1']) + (P['2']|P['2']) + (P['3']|P['3'])\n+\n \"\"\"\n \n if not isinstance(name, str):\n@@ -1388,6 +1412,56 @@ def z(self):\n \"\"\"The basis Vector for the ReferenceFrame, in the z direction. 
\"\"\"\n return self._z\n \n+ @property\n+ def xx(self):\n+ \"\"\"Unit dyad of basis Vectors x and x for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.x, self.x)\n+\n+ @property\n+ def xy(self):\n+ \"\"\"Unit dyad of basis Vectors x and y for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.x, self.y)\n+\n+ @property\n+ def xz(self):\n+ \"\"\"Unit dyad of basis Vectors x and z for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.x, self.z)\n+\n+ @property\n+ def yx(self):\n+ \"\"\"Unit dyad of basis Vectors y and x for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.y, self.x)\n+\n+ @property\n+ def yy(self):\n+ \"\"\"Unit dyad of basis Vectors y and y for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.y, self.y)\n+\n+ @property\n+ def yz(self):\n+ \"\"\"Unit dyad of basis Vectors y and z for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.y, self.z)\n+\n+ @property\n+ def zx(self):\n+ \"\"\"Unit dyad of basis Vectors z and x for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.z, self.x)\n+\n+ @property\n+ def zy(self):\n+ \"\"\"Unit dyad of basis Vectors z and y for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.z, self.y)\n+\n+ @property\n+ def zz(self):\n+ \"\"\"Unit dyad of basis Vectors z and z for the ReferenceFrame.\"\"\"\n+ return Vector.outer(self.z, self.z)\n+\n+ @property\n+ def u(self):\n+ \"\"\"Unit dyadic for the ReferenceFrame.\"\"\"\n+ return self.xx + self.yy + self.zz\n+\n def partial_velocity(self, frame, *gen_speeds):\n \"\"\"Returns the partial angular velocities of this frame in the given\n frame with respect to one or more provided generalized speeds.\ndiff --git a/sympy/physics/vector/tests/test_frame.py b/sympy/physics/vector/tests/test_frame.py\nindex 69cbc97d7451..7d19b99ea01f 100644\n--- a/sympy/physics/vector/tests/test_frame.py\n+++ b/sympy/physics/vector/tests/test_frame.py\n@@ -658,3 +658,63 @@ def test_dcm_cache_dict():\n assert A._dcm_dict == A._dcm_cache\n assert B._dcm_dict == {C: Matrix([[1, 0, 0],[0, cos(b), -sin(b)],[0, sin(b), cos(b)]]), \\\n A: Matrix([[1, 0, 0],[0, cos(b), -sin(b)],[0, sin(b), cos(b)]])}\n+\n+def test_xx_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.xx == Vector.outer(N.x, N.x)\n+ assert F.xx == Vector.outer(F.x, F.x)\n+\n+def test_xy_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.xy == Vector.outer(N.x, N.y)\n+ assert F.xy == Vector.outer(F.x, F.y)\n+\n+def test_xz_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.xz == Vector.outer(N.x, N.z)\n+ assert F.xz == Vector.outer(F.x, F.z)\n+\n+def test_yx_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.yx == Vector.outer(N.y, N.x)\n+ assert F.yx == Vector.outer(F.y, F.x)\n+\n+def test_yy_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.yy == Vector.outer(N.y, N.y)\n+ assert F.yy == Vector.outer(F.y, F.y)\n+\n+def test_yz_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.yz == Vector.outer(N.y, N.z)\n+ assert F.yz == Vector.outer(F.y, F.z)\n+\n+def test_zx_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.zx == Vector.outer(N.z, N.x)\n+ assert F.zx == Vector.outer(F.z, F.x)\n+\n+def test_zy_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.zy 
== Vector.outer(N.z, N.y)\n+ assert F.zy == Vector.outer(F.z, F.y)\n+\n+def test_zz_dyad():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.zz == Vector.outer(N.z, N.z)\n+ assert F.zz == Vector.outer(F.z, F.z)\n+\n+def test_unit_dyadic():\n+ N = ReferenceFrame('N')\n+ F = ReferenceFrame('F', indices=['1', '2', '3'])\n+ assert N.u == N.xx + N.yy + N.zz\n+ assert F.u == F.xx + F.yy + F.zz\n" }
[ { "diff_hunk": "@@ -145,6 +145,30 @@ def __init__(self, name, indices=None, latexs=None, variables=None):\n >>> type(A) == type(D)\n True\n \n+ Unit dyads for the ReferenceFrame can be accessed through the attributes ``xx``, ``xy``, etc. For example:\n+\n+ >>> from sympy.physics.vector import ReferenceFrame\n+ >>> N = ReferenceFrame('N')\n+ >>> N.yz\n+ (N.y|N.z)\n+ >>> N.zx\n+ (N.z|N.x)\n+ >>> P = ReferenceFrame('P', indices=['1', '2', '3'])\n+ >>> P.xx\n+ (P['1']|P['1'])\n+ >>> P.zy\n+ (P['3']|P['2'])\n+\n+ Unit dyadic is also accessible via the ``u`` attribute:\n+\n+ >>> from sympy.physics.vector import ReferenceFrame\n+ >>> N = ReferenceFrame('N')\n+ >>> N.unit", "line": null, "original_line": 166, "original_start_line": null, "path": "sympy/physics/vector/frame.py", "start_line": null, "text": "@user1:\n```suggestion\r\n >>> N.u\r\n```" } ]
70d761f30c2173867786e1e0941b26b8f46b6bca
diff --git a/.mailmap b/.mailmap index b3945e4c9ad8..856c6064202e 100644 --- a/.mailmap +++ b/.mailmap @@ -802,6 +802,7 @@ Kiyohito Yamazaki <[email protected]> Klaus Rettinghaus <[email protected]> Konrad Meyer <[email protected]> Konstantin Togoi <[email protected]> <[email protected]> +Konstantinos Riganas <[email protected]> kostas-rigan <[email protected]> Kristian Brünn <[email protected]> Kristianmitk <[email protected]> Krit Karan <[email protected]> <[email protected]> Kshitij <[email protected]> Kshitij Parwani <[email protected]> diff --git a/sympy/physics/vector/frame.py b/sympy/physics/vector/frame.py index 23f9c0a841de..f337a23e5ad4 100644 --- a/sympy/physics/vector/frame.py +++ b/sympy/physics/vector/frame.py @@ -145,6 +145,30 @@ def __init__(self, name, indices=None, latexs=None, variables=None): >>> type(A) == type(D) True + Unit dyads for the ReferenceFrame can be accessed through the attributes ``xx``, ``xy``, etc. For example: + + >>> from sympy.physics.vector import ReferenceFrame + >>> N = ReferenceFrame('N') + >>> N.yz + (N.y|N.z) + >>> N.zx + (N.z|N.x) + >>> P = ReferenceFrame('P', indices=['1', '2', '3']) + >>> P.xx + (P['1']|P['1']) + >>> P.zy + (P['3']|P['2']) + + Unit dyadic is also accessible via the ``u`` attribute: + + >>> from sympy.physics.vector import ReferenceFrame + >>> N = ReferenceFrame('N') + >>> N.u + (N.x|N.x) + (N.y|N.y) + (N.z|N.z) + >>> P = ReferenceFrame('P', indices=['1', '2', '3']) + >>> P.u + (P['1']|P['1']) + (P['2']|P['2']) + (P['3']|P['3']) + """ if not isinstance(name, str): @@ -1388,6 +1412,56 @@ def z(self): """The basis Vector for the ReferenceFrame, in the z direction. """ return self._z + @property + def xx(self): + """Unit dyad of basis Vectors x and x for the ReferenceFrame.""" + return Vector.outer(self.x, self.x) + + @property + def xy(self): + """Unit dyad of basis Vectors x and y for the ReferenceFrame.""" + return Vector.outer(self.x, self.y) + + @property + def xz(self): + """Unit dyad of basis Vectors x and z for the ReferenceFrame.""" + return Vector.outer(self.x, self.z) + + @property + def yx(self): + """Unit dyad of basis Vectors y and x for the ReferenceFrame.""" + return Vector.outer(self.y, self.x) + + @property + def yy(self): + """Unit dyad of basis Vectors y and y for the ReferenceFrame.""" + return Vector.outer(self.y, self.y) + + @property + def yz(self): + """Unit dyad of basis Vectors y and z for the ReferenceFrame.""" + return Vector.outer(self.y, self.z) + + @property + def zx(self): + """Unit dyad of basis Vectors z and x for the ReferenceFrame.""" + return Vector.outer(self.z, self.x) + + @property + def zy(self): + """Unit dyad of basis Vectors z and y for the ReferenceFrame.""" + return Vector.outer(self.z, self.y) + + @property + def zz(self): + """Unit dyad of basis Vectors z and z for the ReferenceFrame.""" + return Vector.outer(self.z, self.z) + + @property + def u(self): + """Unit dyadic for the ReferenceFrame.""" + return self.xx + self.yy + self.zz + def partial_velocity(self, frame, *gen_speeds): """Returns the partial angular velocities of this frame in the given frame with respect to one or more provided generalized speeds. 
diff --git a/sympy/physics/vector/tests/test_frame.py b/sympy/physics/vector/tests/test_frame.py index 69cbc97d7451..7d19b99ea01f 100644 --- a/sympy/physics/vector/tests/test_frame.py +++ b/sympy/physics/vector/tests/test_frame.py @@ -658,3 +658,63 @@ def test_dcm_cache_dict(): assert A._dcm_dict == A._dcm_cache assert B._dcm_dict == {C: Matrix([[1, 0, 0],[0, cos(b), -sin(b)],[0, sin(b), cos(b)]]), \ A: Matrix([[1, 0, 0],[0, cos(b), -sin(b)],[0, sin(b), cos(b)]])} + +def test_xx_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.xx == Vector.outer(N.x, N.x) + assert F.xx == Vector.outer(F.x, F.x) + +def test_xy_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.xy == Vector.outer(N.x, N.y) + assert F.xy == Vector.outer(F.x, F.y) + +def test_xz_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.xz == Vector.outer(N.x, N.z) + assert F.xz == Vector.outer(F.x, F.z) + +def test_yx_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.yx == Vector.outer(N.y, N.x) + assert F.yx == Vector.outer(F.y, F.x) + +def test_yy_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.yy == Vector.outer(N.y, N.y) + assert F.yy == Vector.outer(F.y, F.y) + +def test_yz_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.yz == Vector.outer(N.y, N.z) + assert F.yz == Vector.outer(F.y, F.z) + +def test_zx_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.zx == Vector.outer(N.z, N.x) + assert F.zx == Vector.outer(F.z, F.x) + +def test_zy_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.zy == Vector.outer(N.z, N.y) + assert F.zy == Vector.outer(F.z, F.y) + +def test_zz_dyad(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.zz == Vector.outer(N.z, N.z) + assert F.zz == Vector.outer(F.z, F.z) + +def test_unit_dyadic(): + N = ReferenceFrame('N') + F = ReferenceFrame('F', indices=['1', '2', '3']) + assert N.u == N.xx + N.yy + N.zz + assert F.u == F.xx + F.yy + F.zz
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-24909@db16c7a
sympy/sympy
Python
24,909
Fix bug when multiplying Prefix and Quantity.
#### References to other Issues or PRs Fixes #24832 #### Brief description of what is fixed or changed Previously, when multiplying a prefix and a quantity, the code would return 1 if the product of their scale factors was equal to 1. This resulted in incorrect output in some cases, such as when multiplying "milli" and "W", which would evaluate to 1 instead of W/1000. To fix this issue, the code has been updated to only return 1 when a prefix is multiplied by another prefix. When a prefix is multiplied by a quantity, the product of their scale factors is no longer checked for equality to 1. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.units * Corrected the bug in multiplication between prefix and quantity, e.g. milli * W = 1 which should be W/1000. <!-- END RELEASE NOTES -->
2023-03-13T14:24:25Z
Bug with milli prefix What happened: ``` In [1]: from sympy.physics.units import milli, W In [2]: milli*W == 1 Out[2]: True In [3]: W*milli Out[3]: watt*Prefix(milli, m, -3, 10) ``` What I expected to happen: milli*W should evaluate to milli watts / mW `milli*W` or more generally `milli` times some unit evaluates to the number 1. I have tried this with Watts and Volts, I'm not sure what other cases this happens. I'm using sympy version 1.11.1-1 on Arch Linux with Python 3.10.9. If you cannot reproduce I would be happy to be of any assitance.
I get a 1 for all of the following (and some are redundant like "V" and "volt"): ```python W, joule, ohm, newton, volt, V, v, volts, henrys, pa, kilogram, ohms, kilograms, Pa, weber, tesla, Wb, H, wb, newtons, kilometers, webers, pascals, kilometer, watt, T, km, kg, joules, pascal, watts, J, henry, kilo, teslas ``` Plus it's only milli. ``` In [65]: for p in PREFIXES: ...: print(p, PREFIXES[p]*W) ...: Y 1000000000000000000000000*watt Z 1000000000000000000000*watt E 1000000000000000000*watt P 1000000000000000*watt T 1000000000000*watt G 1000000000*watt M 1000000*watt k 1000*watt h 100*watt da 10*watt d watt/10 c watt/100 m 1 mu watt/1000000 n watt/1000000000 p watt/1000000000000 f watt/1000000000000000 a watt/1000000000000000000 z watt/1000000000000000000000 y watt/1000000000000000000000000 ``` Dear team, I am excited to contribute to this project and offer my skills. Please let me support the team's efforts and collaborate effectively. Looking forward to working with you all. @Sourabh5768 Thanks for showing interest, you don't need to ask for a contribution If you know how to fix an issue, you can just make a pull request to fix it. I looked at the code and change it a little bit to fix this issue. If anyone has time, please take a look at the pull request.
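A sketch of the corrected behaviour, mirroring the test added in this PR (the `kilo` import follows the test file); illustrative only.

```python
from sympy.physics.units import milli, W
from sympy.physics.units.prefixes import kilo

assert milli * W == W / 1000   # prefix * quantity: previously collapsed to 1
assert milli * kilo == 1       # prefix * prefix with combined scale factor 1 still gives 1
```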
[ { "body": "What happened:\r\n```\r\nIn [1]: from sympy.physics.units import milli, W\r\nIn [2]: milli*W == 1\r\nOut[2]: True\r\nIn [3]: W*milli\r\nOut[3]: watt*Prefix(milli, m, -3, 10)\r\n```\r\nWhat I expected to happen: milli*W should evaluate to milli watts / mW\r\n\r\n`milli*W` or more generally `milli` times some unit evaluates to the number 1. I have tried this with Watts and Volts, I'm not sure what other cases this happens. I'm using sympy version 1.11.1-1 on Arch Linux with Python 3.10.9. If you cannot reproduce I would be happy to be of any assitance.", "number": 24832, "title": "Bug with milli prefix" } ]
d3b4158dea271485e3daa11bf82e69b8dab348ce
{ "head_commit": "db16c7a9a6b7e198a2e0a272a2d0f06294d5f132", "head_commit_message": "fix code quality of the previous commit", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 7f81e5547e2b..05415c6a450c 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -821,6 +821,7 @@ Langston Barrett <[email protected]>\n Lars Buitinck <[email protected]>\n Laura Domine <[email protected]>\n Lauren Glattly <[email protected]>\n+Le Cong Minh Hieu <[email protected]>\n Lee Johnston <[email protected]>\n Lejla Metohajrova <[email protected]>\n Lennart Fricke <[email protected]>\ndiff --git a/sympy/physics/units/prefixes.py b/sympy/physics/units/prefixes.py\nindex ca6a642156bf..56a3de188119 100644\n--- a/sympy/physics/units/prefixes.py\n+++ b/sympy/physics/units/prefixes.py\n@@ -6,7 +6,7 @@\n \"\"\"\n from sympy.core.expr import Expr\n from sympy.core.sympify import sympify\n-\n+from sympy.core.singleton import S\n \n class Prefix(Expr):\n \"\"\"\n@@ -85,9 +85,9 @@ def __mul__(self, other):\n \n fact = self.scale_factor * other.scale_factor\n \n- if fact == 1:\n- return 1\n- elif isinstance(other, Prefix):\n+ if isinstance(other, Prefix):\n+ if fact == 1:\n+ return S.One\n # simplify prefix\n for p in PREFIXES:\n if PREFIXES[p].scale_factor == fact:\n@@ -103,7 +103,7 @@ def __truediv__(self, other):\n fact = self.scale_factor / other.scale_factor\n \n if fact == 1:\n- return 1\n+ return S.One\n elif isinstance(other, Prefix):\n for p in PREFIXES:\n if PREFIXES[p].scale_factor == fact:\ndiff --git a/sympy/physics/units/tests/test_prefixes.py b/sympy/physics/units/tests/test_prefixes.py\nindex 8a7ae3a2c497..8c70c31fc668 100644\n--- a/sympy/physics/units/tests/test_prefixes.py\n+++ b/sympy/physics/units/tests/test_prefixes.py\n@@ -2,7 +2,7 @@\n from sympy.core.numbers import Rational\n from sympy.core.singleton import S\n from sympy.core.symbol import (Symbol, symbols)\n-from sympy.physics.units import Quantity, length, meter\n+from sympy.physics.units import Quantity, length, meter, W\n from sympy.physics.units.prefixes import PREFIXES, Prefix, prefix_unit, kilo, \\\n kibi\n from sympy.physics.units.systems import SI\n@@ -17,7 +17,8 @@ def test_prefix_operations():\n \n dodeca = Prefix('dodeca', 'dd', 1, base=12)\n \n- assert m * k == 1\n+ assert m * k == S.One\n+ assert m * W == W / 1000\n assert k * k == M\n assert 1 / m == k\n assert k / m == M\n@@ -25,7 +26,7 @@ def test_prefix_operations():\n assert dodeca * dodeca == 144\n assert 1 / dodeca == S.One / 12\n assert k / dodeca == S(1000) / 12\n- assert dodeca / dodeca == 1\n+ assert dodeca / dodeca == S.One\n \n m = Quantity(\"fake_meter\")\n SI.set_quantity_dimension(m, S.One)\n" }
[ { "diff_hunk": "@@ -17,15 +17,16 @@ def test_prefix_operations():\n \n dodeca = Prefix('dodeca', 'dd', 1, base=12)\n \n- assert m * k == 1\n+ assert m * k == S.One", "line": null, "original_line": 20, "original_start_line": null, "path": "sympy/physics/units/tests/test_prefixes.py", "start_line": null, "text": "@user1:\nsince 1==S.One, this would need to be tested using `is` instead `==`.\n\n@author:\nI changed accordingly. Thank you." } ]
a0b5e4fa25ee318973e2d983fa9096f998bc042f
diff --git a/.mailmap b/.mailmap index 7f81e5547e2b..05415c6a450c 100644 --- a/.mailmap +++ b/.mailmap @@ -821,6 +821,7 @@ Langston Barrett <[email protected]> Lars Buitinck <[email protected]> Laura Domine <[email protected]> Lauren Glattly <[email protected]> +Le Cong Minh Hieu <[email protected]> Lee Johnston <[email protected]> Lejla Metohajrova <[email protected]> Lennart Fricke <[email protected]> diff --git a/sympy/physics/units/prefixes.py b/sympy/physics/units/prefixes.py index ca6a642156bf..56a3de188119 100644 --- a/sympy/physics/units/prefixes.py +++ b/sympy/physics/units/prefixes.py @@ -6,7 +6,7 @@ """ from sympy.core.expr import Expr from sympy.core.sympify import sympify - +from sympy.core.singleton import S class Prefix(Expr): """ @@ -85,9 +85,9 @@ def __mul__(self, other): fact = self.scale_factor * other.scale_factor - if fact == 1: - return 1 - elif isinstance(other, Prefix): + if isinstance(other, Prefix): + if fact == 1: + return S.One # simplify prefix for p in PREFIXES: if PREFIXES[p].scale_factor == fact: @@ -103,7 +103,7 @@ def __truediv__(self, other): fact = self.scale_factor / other.scale_factor if fact == 1: - return 1 + return S.One elif isinstance(other, Prefix): for p in PREFIXES: if PREFIXES[p].scale_factor == fact: diff --git a/sympy/physics/units/tests/test_prefixes.py b/sympy/physics/units/tests/test_prefixes.py index 8a7ae3a2c497..7b180102ecd0 100644 --- a/sympy/physics/units/tests/test_prefixes.py +++ b/sympy/physics/units/tests/test_prefixes.py @@ -2,7 +2,7 @@ from sympy.core.numbers import Rational from sympy.core.singleton import S from sympy.core.symbol import (Symbol, symbols) -from sympy.physics.units import Quantity, length, meter +from sympy.physics.units import Quantity, length, meter, W from sympy.physics.units.prefixes import PREFIXES, Prefix, prefix_unit, kilo, \ kibi from sympy.physics.units.systems import SI @@ -17,7 +17,8 @@ def test_prefix_operations(): dodeca = Prefix('dodeca', 'dd', 1, base=12) - assert m * k == 1 + assert m * k is S.One + assert m * W == W / 1000 assert k * k == M assert 1 / m == k assert k / m == M @@ -25,7 +26,7 @@ def test_prefix_operations(): assert dodeca * dodeca == 144 assert 1 / dodeca == S.One / 12 assert k / dodeca == S(1000) / 12 - assert dodeca / dodeca == 1 + assert dodeca / dodeca is S.One m = Quantity("fake_meter") SI.set_quantity_dimension(m, S.One)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-24867@2e4cd91
sympy/sympy
Python
24,867
Fixes diff and Derivative different results in matrix expressions
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24859 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-03-06T19:53:26Z
diff and Derivative return different results Here's a short code snippet to create this different results: ``` import sympy as sp A=sp.MatrixSymbol('A', 3, 4) B=sp.MatrixSymbol('B', 4, 3) J=A*B Jinv=sp.Matrix(J).adjugate()#/sp.Matrix(J).det() alpha=sp.symbols('alpha') u=sp.MatrixSymbol('u', 3, 4) x=sp.symbols('x') Jk=Jinv.subs(A,A+x*u) print('Result from diff') sp.pprint(Jk[0,0].diff(x)) print('Result from Derivative') sp.Derivative(Jk[0,0], x).doit() ``` I'm anticipating both would return the same result. Is this intended?
This is the output: ```python In [3]: Jk[0, 0] Out[3]: (x⋅u + A)[1, 0]⋅(x⋅u + A)[2, 1]⋅B₀₁⋅B₁₂ - (x⋅u + A)[1, 0]⋅(x⋅u + A)[2, 1]⋅B₀₂⋅B₁₁ + (x⋅u + A)[1, 0] ⋅(x⋅u + A)[2, 2]⋅B₀₁⋅B₂₂ - (x⋅u + A)[1, 0]⋅(x⋅u + A)[2, 2]⋅B₀₂⋅B₂₁ + (x⋅u + A)[1, 0]⋅(x⋅u + A)[2, 3 ]⋅B₀₁⋅B₃₂ - (x⋅u + A)[1, 0]⋅(x⋅u + A)[2, 3]⋅B₀₂⋅B₃₁ - (x⋅u + A)[1, 1]⋅(x⋅u + A)[2, 0]⋅B₀₁⋅B₁₂ + (x⋅ u + A)[1, 1]⋅(x⋅u + A)[2, 0]⋅B₀₂⋅B₁₁ + (x⋅u + A)[1, 1]⋅(x⋅u + A)[2, 2]⋅B₁₁⋅B₂₂ - (x⋅u + A)[1, 1]⋅(x ⋅u + A)[2, 2]⋅B₁₂⋅B₂₁ + (x⋅u + A)[1, 1]⋅(x⋅u + A)[2, 3]⋅B₁₁⋅B₃₂ - (x⋅u + A)[1, 1]⋅(x⋅u + A)[2, 3]⋅B ₁₂⋅B₃₁ - (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 0]⋅B₀₁⋅B₂₂ + (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 0]⋅B₀₂⋅B₂₁ - (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 1]⋅B₁₁⋅B₂₂ + (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 1]⋅B₁₂⋅B₂₁ + (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 3]⋅B₂₁⋅B₃₂ - (x⋅u + A)[1, 2]⋅(x⋅u + A)[2, 3]⋅B₂₂⋅B₃₁ - (x⋅u + A)[1, 3]⋅(x⋅u + A)[2, 0]⋅B₀₁⋅ B₃₂ + (x⋅u + A)[1, 3]⋅(x⋅u + A)[2, 0]⋅B₀₂⋅B₃₁ - (x⋅u + A)[1, 3]⋅(x⋅u + A)[2, 1]⋅B₁₁⋅B₃₂ + (x⋅u + A) [1, 3]⋅(x⋅u + A)[2, 1]⋅B₁₂⋅B₃₁ - (x⋅u + A)[1, 3]⋅(x⋅u + A)[2, 2]⋅B₂₁⋅B₃₂ + (x⋅u + A)[1, 3]⋅(x⋅u + A )[2, 2]⋅B₂₂⋅B₃₁ In [4]: Jk[0, 0].diff(x) Out[4]: 0 In [5]: diff(Jk[0, 0], x) Out[5]: 0 In [6]: Derivative(Jk[0, 0], x).doit() Out[6]: (x⋅u₁₀ + A₁₀)⋅B₀₁⋅B₁₂⋅u₂₁ + (x⋅u₁₀ + A₁₀)⋅B₀₁⋅B₂₂⋅u₂₂ + (x⋅u₁₀ + A₁₀)⋅B₀₁⋅B₃₂⋅u₂₃ - (x⋅u₁₀ + A₁₀)⋅B ₀₂⋅B₁₁⋅u₂₁ - (x⋅u₁₀ + A₁₀)⋅B₀₂⋅B₂₁⋅u₂₂ - (x⋅u₁₀ + A₁₀)⋅B₀₂⋅B₃₁⋅u₂₃ - (x⋅u₁₁ + A₁₁)⋅B₀₁⋅B₁₂⋅u₂₀ + (x ⋅u₁₁ + A₁₁)⋅B₀₂⋅B₁₁⋅u₂₀ + (x⋅u₁₁ + A₁₁)⋅B₁₁⋅B₂₂⋅u₂₂ + (x⋅u₁₁ + A₁₁)⋅B₁₁⋅B₃₂⋅u₂₃ - (x⋅u₁₁ + A₁₁)⋅B₁₂ ⋅B₂₁⋅u₂₂ - (x⋅u₁₁ + A₁₁)⋅B₁₂⋅B₃₁⋅u₂₃ - (x⋅u₁₂ + A₁₂)⋅B₀₁⋅B₂₂⋅u₂₀ + (x⋅u₁₂ + A₁₂)⋅B₀₂⋅B₂₁⋅u₂₀ - (x⋅u ₁₂ + A₁₂)⋅B₁₁⋅B₂₂⋅u₂₁ + (x⋅u₁₂ + A₁₂)⋅B₁₂⋅B₂₁⋅u₂₁ + (x⋅u₁₂ + A₁₂)⋅B₂₁⋅B₃₂⋅u₂₃ - (x⋅u₁₂ + A₁₂)⋅B₂₂⋅B ₃₁⋅u₂₃ - (x⋅u₁₃ + A₁₃)⋅B₀₁⋅B₃₂⋅u₂₀ + (x⋅u₁₃ + A₁₃)⋅B₀₂⋅B₃₁⋅u₂₀ - (x⋅u₁₃ + A₁₃)⋅B₁₁⋅B₃₂⋅u₂₁ + (x⋅u₁₃ + A₁₃)⋅B₁₂⋅B₃₁⋅u₂₁ - (x⋅u₁₃ + A₁₃)⋅B₂₁⋅B₃₂⋅u₂₂ + (x⋅u₁₃ + A₁₃)⋅B₂₂⋅B₃₁⋅u₂₂ - (x⋅u₂₀ + A₂₀)⋅B₀₁⋅B₁₂ ⋅u₁₁ - (x⋅u₂₀ + A₂₀)⋅B₀₁⋅B₂₂⋅u₁₂ - (x⋅u₂₀ + A₂₀)⋅B₀₁⋅B₃₂⋅u₁₃ + (x⋅u₂₀ + A₂₀)⋅B₀₂⋅B₁₁⋅u₁₁ + (x⋅u₂₀ + A₂₀)⋅B₀₂⋅B₂₁⋅u₁₂ + (x⋅u₂₀ + A₂₀)⋅B₀₂⋅B₃₁⋅u₁₃ + (x⋅u₂₁ + A₂₁)⋅B₀₁⋅B₁₂⋅u₁₀ - (x⋅u₂₁ + A₂₁)⋅B₀₂⋅B₁₁⋅u ₁₀ - (x⋅u₂₁ + A₂₁)⋅B₁₁⋅B₂₂⋅u₁₂ - (x⋅u₂₁ + A₂₁)⋅B₁₁⋅B₃₂⋅u₁₃ + (x⋅u₂₁ + A₂₁)⋅B₁₂⋅B₂₁⋅u₁₂ + (x⋅u₂₁ + A ₂₁)⋅B₁₂⋅B₃₁⋅u₁₃ + (x⋅u₂₂ + A₂₂)⋅B₀₁⋅B₂₂⋅u₁₀ - (x⋅u₂₂ + A₂₂)⋅B₀₂⋅B₂₁⋅u₁₀ + (x⋅u₂₂ + A₂₂)⋅B₁₁⋅B₂₂⋅u₁₁ - (x⋅u₂₂ + A₂₂)⋅B₁₂⋅B₂₁⋅u₁₁ - (x⋅u₂₂ + A₂₂)⋅B₂₁⋅B₃₂⋅u₁₃ + (x⋅u₂₂ + A₂₂)⋅B₂₂⋅B₃₁⋅u₁₃ + (x⋅u₂₃ + A₂₃ )⋅B₀₁⋅B₃₂⋅u₁₀ - (x⋅u₂₃ + A₂₃)⋅B₀₂⋅B₃₁⋅u₁₀ + (x⋅u₂₃ + A₂₃)⋅B₁₁⋅B₃₂⋅u₁₁ - (x⋅u₂₃ + A₂₃)⋅B₁₂⋅B₃₁⋅u₁₁ + (x⋅u₂₃ + A₂₃)⋅B₂₁⋅B₃₂⋅u₁₂ - (x⋅u₂₃ + A₂₃)⋅B₂₂⋅B₃₁⋅u₁₂ ``` > I'm anticipating both would return the same result. Is this intended? No it is not intended. They should give the same result. The problem is here: https://github.com/sympy/sympy/blob/940f1af711d0148d44f6418c3d92ec781f701341/sympy/matrices/expressions/matexpr.py#L635-L639 In the debugger: ```python 635 if not isinstance(v, MatrixElement): 636 from sympy.matrices.matrices import MatrixBase 637 if isinstance(self.parent, MatrixBase): 638 return self.parent.diff(v)[self.i, self.j] 639 -> return S.Zero 640 641 M = self.args[0] 642 643 m, n = self.parent.shape 644 (Pdb) p self (x*u + A)[1, 0] (Pdb) p self.parent x*u + A (Pdb) p type(self.parent) <class 'sympy.matrices.expressions.matadd.MatAdd'> (Pdb) p isinstance(self.parent, MatrixBase) False (Pdb) p self.parent.diff(v) u + 0 (Pdb) p self.parent.diff(v)[self.i, self.j] u[1, 0] ``` I'm not sure why the check is for `MatrixBase` which is the superclass for explicit matrix types (as opposed to matrix expressions). 
It should be guaranteed that the parent is some kind of matrix. The difference with doit is that it is called recursively to change other parts of the expression before computing the derivative: ```python In [3]: from sympy.matrices.expressions.matexpr import MatrixElement In [4]: MatrixElement((x*u + A), 1, 0) Out[4]: (x⋅u + A)[1, 0] In [5]: MatrixElement((x*u + A), 1, 0).doit() Out[5]: x⋅u₁₀ + A₁₀ In [6]: (x*u + A)[1, 0] Out[6]: x⋅u₁₀ + A₁₀ In [7]: MatrixElement((x*u + A), 1, 0).diff(x) Out[7]: 0 In [8]: MatrixElement((x*u + A), 1, 0).doit().diff(x) Out[8]: u₁₀ In [9]: (x*u + A)[1, 0].diff(x) Out[9]: u₁₀ ``` As far as I can tell this fixes it: ```diff diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py index a433c8b..75ee617 100644 --- a/sympy/matrices/expressions/matexpr.py +++ b/sympy/matrices/expressions/matexpr.py @@ -633,10 +633,7 @@ def indices(self): def _eval_derivative(self, v): if not isinstance(v, MatrixElement): - from sympy.matrices.matrices import MatrixBase - if isinstance(self.parent, MatrixBase): - return self.parent.diff(v)[self.i, self.j] - return S.Zero + return self.parent.diff(v)[self.i, self.j] M = self.args[0] ``` CC @Upabjojr the relevant code here is all from your commits. I don't understand why this is checking if the parent is MatrixBase. It is clearly some kind of matrix.
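A behavioural sketch mirroring the regression test added in this PR (it uses the smaller 2×3 / 3×2 matrices from the test rather than the original report); illustrative only.

```python
from sympy import Matrix, MatrixSymbol, diff, symbols

x = symbols('x')
A = MatrixSymbol('A', 2, 3)
B = MatrixSymbol('B', 3, 2)
u = MatrixSymbol('u', 2, 3)
Jk = Matrix(A*B).adjugate().subs(A, A + x*u)

expected = B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]
assert Jk[0, 0].diff(x) == expected          # previously this returned 0
assert diff(Jk[0, 0], x).doit() == expected  # now agrees with Derivative(...).doit()
```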
[ { "body": "Here's a short code snippet to create this different results:\r\n\r\n```\r\nimport sympy as sp\r\nA=sp.MatrixSymbol('A', 3, 4)\r\nB=sp.MatrixSymbol('B', 4, 3)\r\nJ=A*B\r\nJinv=sp.Matrix(J).adjugate()#/sp.Matrix(J).det()\r\n\r\nalpha=sp.symbols('alpha')\r\nu=sp.MatrixSymbol('u', 3, 4)\r\nx=sp.symbols('x')\r\nJk=Jinv.subs(A,A+x*u)\r\n\r\nprint('Result from diff')\r\nsp.pprint(Jk[0,0].diff(x))\r\n\r\nprint('Result from Derivative')\r\nsp.Derivative(Jk[0,0], x).doit()\r\n```\r\n\r\nI'm anticipating both would return the same result. Is this intended?", "number": 24859, "title": "diff and Derivative return different results" } ]
05516b96c3957e0f9424148cb3b5460f2262656a
{ "head_commit": "2e4cd9199f25a901382b2fc9fcf926111cf78fe3", "head_commit_message": "added simple tests", "patch_to_review": "diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py\nindex a433c8bbb49d..75ee6177c0f6 100644\n--- a/sympy/matrices/expressions/matexpr.py\n+++ b/sympy/matrices/expressions/matexpr.py\n@@ -633,10 +633,7 @@ def indices(self):\n def _eval_derivative(self, v):\n \n if not isinstance(v, MatrixElement):\n- from sympy.matrices.matrices import MatrixBase\n- if isinstance(self.parent, MatrixBase):\n- return self.parent.diff(v)[self.i, self.j]\n- return S.Zero\n+ return self.parent.diff(v)[self.i, self.j]\n \n M = self.args[0]\n \ndiff --git a/sympy/matrices/expressions/tests/test_matexpr.py b/sympy/matrices/expressions/tests/test_matexpr.py\nindex 36b3846c2ec5..53604c6c74ea 100644\n--- a/sympy/matrices/expressions/tests/test_matexpr.py\n+++ b/sympy/matrices/expressions/tests/test_matexpr.py\n@@ -404,6 +404,18 @@ def test_issue_21195():\n assert A.diff(x) == B\n \n \n+def test_issue_24859():\n+ A = MatrixSymbol('A', 2, 3)\n+ B = MatrixSymbol('B', 3, 2)\n+ J = A*B\n+ Jinv = Matrix(J).adjugate()\n+ u = MatrixSymbol('u', 2, 3)\n+ Jk = Jinv.subs(A, A + x*u)\n+\n+ assert Jk[0,0].diff(x).simplify() == B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]\n+ assert diff(Jk[0,0], x).doit().simplify() == B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]\n+\n+\n def test_MatMul_postprocessor():\n z = zeros(2)\n z1 = ZeroMatrix(2, 2)\n" }
[ { "diff_hunk": "@@ -404,6 +404,18 @@ def test_issue_21195():\n assert A.diff(x) == B\n \n \n+def test_issue_24859():\n+ A = MatrixSymbol('A', 2, 3)\n+ B = MatrixSymbol('B', 3, 2)\n+ J = A*B\n+ Jinv = Matrix(J).adjugate()\n+ u = MatrixSymbol('u', 2, 3)\n+ Jk = Jinv.subs(A, A + x*u)\n+\n+ assert Jk[0,0].diff(x).simplify() == B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]\n+ assert diff(Jk[0,0], x).doit().simplify() == B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]", "line": null, "original_line": 416, "original_start_line": 415, "path": "sympy/matrices/expressions/tests/test_matexpr.py", "start_line": null, "text": "@user1:\nThree things:\r\n\r\n1. These lines are quite long. Can you refactor the RHSs of the equality checks to an assigned variable that can be reused, something like `expected = B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]`.\r\n2. Can we please format the matrix indexing consistently, i.e. change the `Jk[0,0]` to `Jk[0, 0]`.\r\n3. Calls to `simplify` should be avoided if possible because it uses a heuristic approach. If you know the type of simplification needed to achieve the desired result then use one of the [targeted simplification functions](https://docs.sympy.org/latest/tutorials/intro-tutorial/simplification.html). Running locally it looks like the `simplify` isn't needed.\r\n\r\n```suggestion\r\n expected = B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2]\r\n assert Jk[0, 0].diff(x) == expected\r\n assert diff(Jk[0, 0], x).doit() == expected\r\n```\n\n@user1:\nAddressed by https://github.com/sympy/sympy/pull/24867/commits/0fd5f03d09d8be1728da7c3c815cdd39350a07e1." } ]
0fd5f03d09d8be1728da7c3c815cdd39350a07e1
diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py index a433c8bbb49d..75ee6177c0f6 100644 --- a/sympy/matrices/expressions/matexpr.py +++ b/sympy/matrices/expressions/matexpr.py @@ -633,10 +633,7 @@ def indices(self): def _eval_derivative(self, v): if not isinstance(v, MatrixElement): - from sympy.matrices.matrices import MatrixBase - if isinstance(self.parent, MatrixBase): - return self.parent.diff(v)[self.i, self.j] - return S.Zero + return self.parent.diff(v)[self.i, self.j] M = self.args[0] diff --git a/sympy/matrices/expressions/tests/test_matexpr.py b/sympy/matrices/expressions/tests/test_matexpr.py index 36b3846c2ec5..d08933cf8535 100644 --- a/sympy/matrices/expressions/tests/test_matexpr.py +++ b/sympy/matrices/expressions/tests/test_matexpr.py @@ -404,6 +404,19 @@ def test_issue_21195(): assert A.diff(x) == B +def test_issue_24859(): + A = MatrixSymbol('A', 2, 3) + B = MatrixSymbol('B', 3, 2) + J = A*B + Jinv = Matrix(J).adjugate() + u = MatrixSymbol('u', 2, 3) + Jk = Jinv.subs(A, A + x*u) + + expected = B[0, 1]*u[1, 0] + B[1, 1]*u[1, 1] + B[2, 1]*u[1, 2] + assert Jk[0, 0].diff(x) == expected + assert diff(Jk[0, 0], x).doit() == expected + + def test_MatMul_postprocessor(): z = zeros(2) z1 = ZeroMatrix(2, 2)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-24747@f6a3e20
sympy/sympy
Python
24,747
Fix erroring test in `sympy/printing/tests/test_aesaracode.py`
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24746 #### Brief description of what is fixed or changed Use Aesara `true_divide` function in place of deprecated `true_div`. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-02-20T12:48:12Z
`optional-dependencies` jobs in CI failing due to test error in `sympy/printing/tests/test_aesaracode.py` Full traceback is: ``` ____________ sympy/printing/tests/test_aesaracode.py:test_Rationals ____________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/testing/runtests.py", line 1337, in _timeout function() File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_aesaracode.py", line 267, in test_Rationals assert theq(aesara_code_(sy.Integer(2) / 3), aet.true_div(2, 3)) File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_aesaracode.py", line 57, in aesara_code_ return aesara_code(expr, **kwargs) File "/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py", line 347, in aesara_code return AesaraPrinter(cache=cache, settings={}).doprint(expr, **kwargs) File "/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py", line 307, in doprint return self._print(expr, dtypes=dtypes, broadcastables=broadcastables) File "/home/runner/work/sympy/sympy/sympy/printing/printer.py", line 331, in _print return printmethod(expr, **kwargs) File "/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py", line 241, in _print_Rational return aet.true_div(self._print(expr.p, **kwargs), File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/aesara/tensor/__init__.py", line 172, in __getattr__ warn(msg, DeprecationWarning, stacklevel=2) DeprecationWarning: `true_div` is deprecated; use `true_divide` or `divide` instead. ```
This is currently causing multiple job failures in all CI. Opened PR #24747 to fix. The issue seems to stem from the use of `aesara.tensor.true_div` in the Aesara printer as well as in the test, because this has been deprecated. Interestingly, it doesn't seem to be erroring with pytest. > Interestingly, it doesn't seem to be erroring with pytest. With pytest, warnings are collected and shown at the end. You can use `-Werror` to make warnings become errors under pytest: https://docs.pytest.org/en/7.1.x/how-to/capture-warnings.html
[ { "body": "Full traceback is:\r\n```\r\n____________ sympy/printing/tests/test_aesaracode.py:test_Rationals ____________\r\nTraceback (most recent call last):\r\n File \"/home/runner/work/sympy/sympy/sympy/testing/runtests.py\", line 1337, in _timeout\r\n function()\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/tests/test_aesaracode.py\", line 267, in test_Rationals\r\n assert theq(aesara_code_(sy.Integer(2) / 3), aet.true_div(2, 3))\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/tests/test_aesaracode.py\", line 57, in aesara_code_\r\n return aesara_code(expr, **kwargs)\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py\", line 347, in aesara_code\r\n return AesaraPrinter(cache=cache, settings={}).doprint(expr, **kwargs)\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py\", line 307, in doprint\r\n return self._print(expr, dtypes=dtypes, broadcastables=broadcastables)\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/printer.py\", line 331, in _print\r\n return printmethod(expr, **kwargs)\r\n File \"/home/runner/work/sympy/sympy/sympy/printing/aesaracode.py\", line 241, in _print_Rational\r\n return aet.true_div(self._print(expr.p, **kwargs),\r\n File \"/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/aesara/tensor/__init__.py\", line 172, in __getattr__\r\n warn(msg, DeprecationWarning, stacklevel=2)\r\nDeprecationWarning: `true_div` is deprecated; use `true_divide` or `divide` instead.\r\n```", "number": 24746, "title": "`optional-dependencies` jobs in CI failing due to test error in `sympy/printing/tests/test_aesaracode.py`" } ]
c6cb7c5602fa48034ab1bd43c2347a7e8488f12e
{ "head_commit": "f6a3e207d1b16c3e34abb0914b81f7c7559f1d91", "head_commit_message": "Fix typo in module name", "patch_to_review": "diff --git a/sympy/printing/aesaracode.py b/sympy/printing/aesaracode.py\nindex 1806a0721328..5f6fc2e56a19 100644\n--- a/sympy/printing/aesaracode.py\n+++ b/sympy/printing/aesaracode.py\n@@ -17,6 +17,13 @@\n from aesara.tensor.elemwise import Elemwise\n from aesara.tensor.elemwise import DimShuffle\n \n+ # `true_divide` replaced `true_div` in Aesara 2.8.11 (released 2023) to\n+ # match NumPy\n+ # XXX: Remove this when not needed to support older versions.\n+ true_divide = getattr(aet, 'true_divide', None)\n+ if true_divide is None:\n+ true_divide = aet.true_div\n+\n mapping = {\n sympy.Add: aet.add,\n sympy.Mul: aet.mul,\n@@ -69,6 +76,8 @@\n sympy.Transpose: DimShuffle((False, False), [1, 0]),\n }\n \n+ aesara_version = tuple(getattr(aesara, '__version__').split('.'))\n+\n \n class AesaraPrinter(Printer):\n \"\"\" Code printer which creates Aesara symbolic expression graphs.\n@@ -238,8 +247,8 @@ def _print_Piecewise(self, expr, **kwargs):\n return aet.switch(p_cond, p_e, p_remaining)\n \n def _print_Rational(self, expr, **kwargs):\n- return aet.true_div(self._print(expr.p, **kwargs),\n- self._print(expr.q, **kwargs))\n+ return true_divide(self._print(expr.p, **kwargs),\n+ self._print(expr.q, **kwargs))\n \n def _print_Integer(self, expr, **kwargs):\n return expr.p\ndiff --git a/sympy/printing/tests/test_aesaracode.py b/sympy/printing/tests/test_aesaracode.py\nindex d820b9fc97ed..21484626dce9 100644\n--- a/sympy/printing/tests/test_aesaracode.py\n+++ b/sympy/printing/tests/test_aesaracode.py\n@@ -30,6 +30,8 @@\n from aesara.tensor.elemwise import Elemwise, DimShuffle\n from aesara.tensor.math import Dot\n \n+ from sympy.printing.aesaracode import true_divide\n+\n xt, yt, zt = [aet.scalar(name, 'floatX') for name in 'xyz']\n Xt, Yt, Zt = [aet.tensor('floatX', (False, False), name=n) for n in 'XYZ']\n else:\n@@ -264,8 +266,8 @@ def test_MatAdd():\n \n \n def test_Rationals():\n- assert theq(aesara_code_(sy.Integer(2) / 3), aet.true_div(2, 3))\n- assert theq(aesara_code_(S.Half), aet.true_div(1, 2))\n+ assert theq(aesara_code_(sy.Integer(2) / 3), true_divide(2, 3))\n+ assert theq(aesara_code_(S.Half), true_divide(1, 2))\n \n def test_Integers():\n assert aesara_code_(sy.Integer(3)) == 3\n" }
[ { "diff_hunk": "@@ -69,6 +76,8 @@\n sympy.Transpose: DimShuffle((False, False), [1, 0]),\n }\n \n+ aesara_version = tuple(getattr(aesara, '__version__').split('.'))", "line": null, "original_line": 79, "original_start_line": null, "path": "sympy/printing/aesaracode.py", "start_line": null, "text": "@user1:\nI guess we don't need this any more.\n\n@author:\nGood spot. Address in https://github.com/sympy/sympy/pull/24747/commits/72b658b0f8f70c78ba3d645a2dacfe3bbcbc6f90." } ]
72b658b0f8f70c78ba3d645a2dacfe3bbcbc6f90
diff --git a/sympy/printing/aesaracode.py b/sympy/printing/aesaracode.py index 1806a0721328..87117e06fadb 100644 --- a/sympy/printing/aesaracode.py +++ b/sympy/printing/aesaracode.py @@ -17,6 +17,13 @@ from aesara.tensor.elemwise import Elemwise from aesara.tensor.elemwise import DimShuffle + # `true_divide` replaced `true_div` in Aesara 2.8.11 (released 2023) to + # match NumPy + # XXX: Remove this when not needed to support older versions. + true_divide = getattr(aet, 'true_divide', None) + if true_divide is None: + true_divide = aet.true_div + mapping = { sympy.Add: aet.add, sympy.Mul: aet.mul, @@ -238,8 +245,8 @@ def _print_Piecewise(self, expr, **kwargs): return aet.switch(p_cond, p_e, p_remaining) def _print_Rational(self, expr, **kwargs): - return aet.true_div(self._print(expr.p, **kwargs), - self._print(expr.q, **kwargs)) + return true_divide(self._print(expr.p, **kwargs), + self._print(expr.q, **kwargs)) def _print_Integer(self, expr, **kwargs): return expr.p diff --git a/sympy/printing/tests/test_aesaracode.py b/sympy/printing/tests/test_aesaracode.py index d820b9fc97ed..21484626dce9 100644 --- a/sympy/printing/tests/test_aesaracode.py +++ b/sympy/printing/tests/test_aesaracode.py @@ -30,6 +30,8 @@ from aesara.tensor.elemwise import Elemwise, DimShuffle from aesara.tensor.math import Dot + from sympy.printing.aesaracode import true_divide + xt, yt, zt = [aet.scalar(name, 'floatX') for name in 'xyz'] Xt, Yt, Zt = [aet.tensor('floatX', (False, False), name=n) for n in 'XYZ'] else: @@ -264,8 +266,8 @@ def test_MatAdd(): def test_Rationals(): - assert theq(aesara_code_(sy.Integer(2) / 3), aet.true_div(2, 3)) - assert theq(aesara_code_(S.Half), aet.true_div(1, 2)) + assert theq(aesara_code_(sy.Integer(2) / 3), true_divide(2, 3)) + assert theq(aesara_code_(S.Half), true_divide(1, 2)) def test_Integers(): assert aesara_code_(sy.Integer(3)) == 3
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-24750@516a019
sympy/sympy
Python
24,750
Test case added for numerical evaluation in test_simplify
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #11004 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-02-20T20:29:22Z
problems with numerical evaluation I working with a simple statistical mechanics example, using the Stirling approximation of the factorial in combinatorial functions: ``` from __future__ import division from sympy.abc import * from sympy import * from sympy.physics.units import avogadro_number as N # Stirling's approximation for factorial def f(n): return sqrt(2*pi*n)*(n/E)**n # number of k-equipartitions of n particles def m(n, k): return f(n)/ (f(n/k)**k) # probability of a k-partition being an equipartition def p(n,k): return m(n, k)/(k**n) q = (p(n, 2)/p(n, 3)).simplify() z = log(p(n, k)/p(n, k+1)).expand(force=True) ``` Working tests: ``` >>> print q.n(4) 0.9648*n**0.5 >>> print q.n(4,subs={n: N}) 7.487e+11 >>> print z k*n*log(n)/(k + 1) - k*n*log(k + 1)/(k + 1) - k*n/(k + 1) + k*log(k)/2 - k*log(k + 1)/2 - n*log(n) + n*log(k + 1) + n + n*log(n)/(k + 1) - n*log(k + 1)/(k + 1) - n/(k + 1) + log(n)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2 >>> print z.simplify() k*log(k)/2 - k*log(k + 1)/2 + log(n)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2 >>> print z.simplify().n(4) 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189 >>> print z.n(4).simplify() 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189 >>> print z.subs(n,N).simplify() k*log(k)/2 - k*log(k + 1)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2 + log(602214179000000000000000)/2 >>> print z.subs(n,N).simplify().n(4) 0.5*k*log(k) - 0.5*k*log(k + 1) - 0.5*log(k + 1) + 28.3 ``` The problem cases: - Substitution not working `n` is not replaced by `N` - note that `print q.n(4,subs={n: N})` above works ``` >>> print z.n(4, subs={n: N}) k*n*log(n)/(k + 1.0) - k*n*log(k + 1)/(k + 1.0) - k*n/(k + 1.0) + 0.5*k*log(k) - 0.5*k*log(k + 1) - n*log(n) + n*log(k + 1) + n + n*log(n)/(k + 1.0) - n*log(k + 1)/(k + 1.0) - n/(k + 1.0) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189 >>> print z.simplify().n(4, subs={n: N}) 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189 ``` - This looses the `log(n)` term completely and performs some other invalid transformations ``` >>> print z.subs(n,N).n(4).simplify() 0.5*k*(k*log(k) - k*log(k + 1) + log(k))/(k + 1) ```
Nothing stands out to me as being wrong: ``` >>> z.subs(n,2).n(4).simplify() 0.5*k*log(k) - 0.5*k*log(k + 1) - 0.5*log(k + 1) + 1.266 >>> z.simplify().subs(n,2).n(4) 0.5*k*log(k) - 0.5*k*log(k + 1) - 0.5*log(k + 1) + 1.266 >>> z.simplify().n(4, subs={n:2}) 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189 ``` The only thing different in case 3 is that the n has not been replaced (because doing so doesn't change the expression to a number that can be evaluated) so it isn't combined in with the numerical term -- thus 0.9189 instead of 1.266. Is there something that I am missing? In my last example ``` >>> print z.subs(n,N).n(4).simplify() 0.5*k*(k*log(k) - k*log(k + 1) + log(k))/(k + 1) ``` There is no numerical term that comes from `N`. It does not depend on `N` at all. Where did the `N` go? Also when you divide through by `(k + 1)` coefficient of `log(k + 1)` is `0.5*k*k/(k + 1)` which not equivalent to that in the other evaluations. In the `z.simplify().n(4, subs={n:2})` case I do not understand the explanation: > the n has not been replaced (because doing so doesn't change the expression to a number that can be evaluated) What is the difference between `X.n(4, subs={n:2})` and `X.subs(n,2).n(4)`? I thought they should be equvalent. I think the problem is just that evalf doesn't work very well with expressions involving symbols unless all of them are fully substituted for actual numbers. > What is the difference between `X.n(4, subs={n:2})` and `X.subs(n,2).n(4)`? > There is no numerical term that comes from `N`. It does not depend on `N` at all. Where did the `N` go? It is now there in master: ``` >>> simplify(z.subs(n,N).n(4)) 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(avogadro_number) - 0.5*log(k + 1) + 0.9189 ``` This issue could probably be closed with a test (or confirmation that a test addressing the root issue has already been added by the commit that fixed this). I can't find avogadro_number defined in the physics module. Tried adding test cases to `test_simplify.py` ```python N=Symbol('N') def f(n): return sqrt(2*pi*n)*(n/E)**n def m(n, k): return f(n)/ (f(n/k)**k) def p(n,k): return m(n, k)/(k**n) q = (p(n, 2)/p(n, 3)).simplify() z = log(p(n, k)/p(n, k+1)).expand(force=True) assert simplify(z.subs(n,N).n(4)) == 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(N) - 0.5*log(k + 1) + 0.9189 ``` Still, it gives assertion error, but checking for simplify(z.subs(n,N).n(4)), it returns the same expression as above. It returns `0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(N) - 0.5*log(k + 1) + 0.9189` What could be the possible reason @smichr @oscarbenjamin @mhlr? You need to create floats with the correct precision: ```python In [23]: r = simplify(z.subs(n,N).n(4)) In [24]: half = Float('0.5', 4) In [25]: r == half*k*log(k) - half*k*log(k + 1) + half*log(N) - half*log(k + 1) + Float(0.9189224, 4) Out[25]: True ``` Thank you for your assistance. Kindly review the PR. > I can't find avogadro_number defined in the physics module. > > Tried adding test cases to `test_simplify.py` > > ```python > N=Symbol('N') > def f(n): > return sqrt(2*pi*n)*(n/E)**n > def m(n, k): > return f(n)/ (f(n/k)**k) > def p(n,k): > return m(n, k)/(k**n) > q = (p(n, 2)/p(n, 3)).simplify() > z = log(p(n, k)/p(n, k+1)).expand(force=True) > assert simplify(z.subs(n,N).n(4)) == 0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(N) - 0.5*log(k + 1) + 0.9189 > ``` > > Still, it gives assertion error, but checking for simplify(z.subs(n,N).n(4)), it returns the same expression as above. 
It returns `0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(N) - 0.5*log(k + 1) + 0.9189`. What could be the possible reason, @smichr @oscarbenjamin @mhlr? You can find the Avogadro number using the following code: ![image](https://user-images.githubusercontent.com/78342519/137574753-a191bc72-708a-45e1-a6ea-dd0e59cb9b7d.png)
[ { "body": "I working with a simple statistical mechanics example,\nusing the Stirling approximation of the factorial in combinatorial functions:\n\n```\nfrom __future__ import division\nfrom sympy.abc import *\nfrom sympy import *\nfrom sympy.physics.units import avogadro_number as N\n\n\n# Stirling's approximation for factorial\ndef f(n):\n return sqrt(2*pi*n)*(n/E)**n\n\n# number of k-equipartitions of n particles\ndef m(n, k):\n return f(n)/ (f(n/k)**k)\n\n# probability of a k-partition being an equipartition\ndef p(n,k):\n return m(n, k)/(k**n)\n\nq = (p(n, 2)/p(n, 3)).simplify()\n\nz = log(p(n, k)/p(n, k+1)).expand(force=True)\n```\n\nWorking tests:\n\n```\n>>> print q.n(4)\n0.9648*n**0.5\n>>> print q.n(4,subs={n: N})\n7.487e+11\n>>> print z \nk*n*log(n)/(k + 1) - k*n*log(k + 1)/(k + 1) - k*n/(k + 1) + k*log(k)/2 - k*log(k + 1)/2 - n*log(n) + n*log(k + 1) + n + n*log(n)/(k + 1) - n*log(k + 1)/(k + 1) - n/(k + 1) + log(n)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2\n>>> print z.simplify()\nk*log(k)/2 - k*log(k + 1)/2 + log(n)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2\n>>> print z.simplify().n(4)\n0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189\n>>> print z.n(4).simplify()\n0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189\n>>> print z.subs(n,N).simplify()\nk*log(k)/2 - k*log(k + 1)/2 - log(k + 1)/2 + log(2)/2 + log(pi)/2 + log(602214179000000000000000)/2\n>>> print z.subs(n,N).simplify().n(4)\n0.5*k*log(k) - 0.5*k*log(k + 1) - 0.5*log(k + 1) + 28.3\n```\n\nThe problem cases:\n- Substitution not working `n` is not replaced by `N` - note that `print q.n(4,subs={n: N})` above works\n\n```\n>>> print z.n(4, subs={n: N})\nk*n*log(n)/(k + 1.0) - k*n*log(k + 1)/(k + 1.0) - k*n/(k + 1.0) + 0.5*k*log(k) - 0.5*k*log(k + 1) - n*log(n) + n*log(k + 1) + n + n*log(n)/(k + 1.0) - n*log(k + 1)/(k + 1.0) - n/(k + 1.0) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189\n\n>>> print z.simplify().n(4, subs={n: N}) \n0.5*k*log(k) - 0.5*k*log(k + 1) + 0.5*log(n) - 0.5*log(k + 1) + 0.9189\n```\n- This looses the `log(n)` term completely and performs some other invalid transformations\n\n```\n>>> print z.subs(n,N).n(4).simplify()\n0.5*k*(k*log(k) - k*log(k + 1) + log(k))/(k + 1)\n```\n", "number": 11004, "title": "problems with numerical evaluation" } ]
7459acbf979ad88cedb781de016d19722bc7a959
{ "head_commit": "516a019633a622330ef4067b516dd2a3d1f2643c", "head_commit_message": "fixed formatting", "patch_to_review": "diff --git a/sympy/simplify/tests/test_simplify.py b/sympy/simplify/tests/test_simplify.py\nindex 4ed4fc47f6f2..4cf06956408d 100644\n--- a/sympy/simplify/tests/test_simplify.py\n+++ b/sympy/simplify/tests/test_simplify.py\n@@ -1036,6 +1036,22 @@ def test_issue_23543():\n x, y, z = symbols(\"x y z\", commutative=False)\n assert (x*(y + z/2)).simplify() == x*(2*y + z)/2\n \n+def test_issue_11004():\n+ def f(n):\n+ return sqrt(2*pi*n) * (n/E)**n\n+\n+ def m(n, k):\n+ return f(n) / (f(n/k)**k)\n+\n+ def p(n,k):\n+ return m(n, k) / (k**n)\n+\n+ N, k = symbols('N k')\n+ half = Float('0.5', 4)\n+ z = log(p(n, k) / p(n, k + 1)).expand(force=True)\n+ r = simplify(z.subs(n, N).n(4))\n+ assert r == (half*k*log(k) - half*k*log(k + 1) + half*log(N) - half*log(k + 1) + Float(0.9189224, 4))\n+\n \n def test_issue_19161():\n polynomial = Poly('x**2').simplify()\n" }
[ { "diff_hunk": "@@ -1036,6 +1036,22 @@ def test_issue_23543():\n x, y, z = symbols(\"x y z\", commutative=False)\n assert (x*(y + z/2)).simplify() == x*(2*y + z)/2\n \n+def test_issue_11004():\n+ def f(n):\n+ return sqrt(2*pi*n) * (n/E)**n\n+\n+ def m(n, k):\n+ return f(n) / (f(n/k)**k)\n+\n+ def p(n,k):\n+ return m(n, k) / (k**n)\n+\n+ N, k = symbols('N k')\n+ half = Float('0.5', 4)\n+ z = log(p(n, k) / p(n, k + 1)).expand(force=True)\n+ r = simplify(z.subs(n, N).n(4))\n+ assert r == (half*k*log(k) - half*k*log(k + 1) + half*log(N) - half*log(k + 1) + Float(0.9189224, 4))", "line": null, "original_line": 1053, "original_start_line": null, "path": "sympy/simplify/tests/test_simplify.py", "start_line": null, "text": "@user1:\nPersonally I think this line is too long at over 100 characters. Black (with more SymPy-esque whitespace around operators) would suggest to reformat it like this:\r\n```suggestion\r\n assert r == (\r\n half*k*log(k)\r\n - half*k*log(k + 1)\r\n + half*log(N)\r\n - half*log(k + 1)\r\n + Float(0.9189224, 4)\r\n )\r\n```\r\nOthers may disagree that line length here isn't an issue, but it is considerably longer than anything surrounding it in this tests module." } ]
fac1a5ce5d3801d67ad97d12a4b5998df5d992bd
diff --git a/sympy/simplify/tests/test_simplify.py b/sympy/simplify/tests/test_simplify.py index 4ed4fc47f6f2..a26e8e33a2eb 100644 --- a/sympy/simplify/tests/test_simplify.py +++ b/sympy/simplify/tests/test_simplify.py @@ -1037,6 +1037,30 @@ def test_issue_23543(): assert (x*(y + z/2)).simplify() == x*(2*y + z)/2 +def test_issue_11004(): + + def f(n): + return sqrt(2*pi*n) * (n/E)**n + + def m(n, k): + return f(n) / (f(n/k)**k) + + def p(n,k): + return m(n, k) / (k**n) + + N, k = symbols('N k') + half = Float('0.5', 4) + z = log(p(n, k) / p(n, k + 1)).expand(force=True) + r = simplify(z.subs(n, N).n(4)) + assert r == ( + half*k*log(k) + - half*k*log(k + 1) + + half*log(N) + - half*log(k + 1) + + Float(0.9189224, 4) + ) + + def test_issue_19161(): polynomial = Poly('x**2').simplify() assert (polynomial-x**2).simplify() == 0
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-24666@7d194c7
sympy/sympy
Python
24,666
Added test for simplified Piecewise is missing conditions
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #21481 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-02-06T08:51:03Z
simplified Piecewise is missing conditions I have been trying to write a model for the size of symbolic powers. I define the Piecewise expression as follows for power `b**e` with the expressions of the Piecewise giving a value that is representative of the region in which the size of the power is found (e.g. 1/2 if the value of the power is between 0 and 1): ```python var('b e') A=Piecewise((1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))), (0, Eq(b, 0) & (e > 0)), (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)), (Piecewise((2, ((b > 1) & (e > 0)) | ((b > 0) & (b < 1) & (e < 0)) | ((e >= 2) & (b < -1) & Eq(Mod(e, 2), 0)) | ((e <= -2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))), (S.Half, ((b > 1) & (e < 0)) | ((b > 0) & (e > 0) & (b < 1)) | ((e <= -2) & (b < -1) & Eq(Mod(e, 2), 0)) | ((e >= 2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))), (-S.Half, Eq(Mod(e, 2), 1) & (((e <= -1) & (b < -1)) | ((e >= 1) & (b > -1) & (b < 0)))), (-2, ((e >= 1) & (b < -1) & Eq(Mod(e, 2), 1)) | ((e <= -1) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 1)))), Eq(im(b), 0) & Eq(im(e), 0))) ``` As a test of folding and simplification I create the following alternative forms for A ```python B=piecewise_fold(A) sa=A.simplify() sb=B.simplify() ``` I then test the forms over a range of `b` and `e` values: ```python v = Tuple(-2, -1, -0.5, 0, 0.5, 1, 2) for i in v: for j in v: r = {b:i,e:j} ok=[k.xreplace(r) for k in (A,B,sa,sb)] if len(set(ok))!=1:print('ab %s %s'%(r,ok)) ``` Although `A` and `B` agree, the simplified forms of each do not and I get these results ```python s {b: -2, e: -1} [-1/2, -1/2, -1/2, nan] s {b: -2, e: 1} [-2, -2, -2, nan] s {b: -1/2, e: 1} [-1/2, -1/2, -1/2, nan] ``` The `nan` indicates that the conditions for the values of `b` and `e` that were passed are not found in the Piecewise expression so a default `nan` is returned. I confirmed that the value returned by `sa` agrees with the value from `A`. I have not yet tracked down the source of the discrepancy from `sb` in which are missing the cases for -2 and -1/2: ```python Piecewise( (1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))), (0, Eq(b, 0) & (e > 0)), (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)), (2, Eq(im(b), 0) & Eq(im(e), 0) & ((b > 0) | (b > 1)) & ((b > 0) | (e > 0)) & ((b > 1) | (b < 1)) & ((b > 1) | (e < 0)) & ((e > 0) | (b < 1)) & ((e > 0) | (e < 0))), (1/2, Eq(im(b), 0) & Eq(im(e), 0) & ((b > 0) | (b > 1)) & ((b > 0) | (e < 0)) & ((b > 1) | (e > 0)) & ((b > 1) | (b < 1)) & ((e > 0) | (e < 0)) & ((b < 1) | (e <0)))) ```
It looks like it is the Mod that is causing the trouble: ```python >>> p ((e >= 1) & (b < -1) & Eq(Mod(e, 2), 1)) | ((e <= -1) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 1)) >>> p.simplify() (b < -1) & Eq(Mod(e, 2), 1) >>> p.subs(Eq(Mod(e, 2), 1),True) ((e >= 1) & (b < -1)) | ((e <= -1) & (b > -1) & (b < 0)) >>> _.simplify() ((e >= 1) | (e <= -1)) & ((e >= 1) | (b > -1)) & ((e >= 1) | (b < 0)) & ((e <= -1) | (b < -1)) & ((b > -1) | (b < -1)) & ((b < -1) | (b < 0)) ``` This seems to have been fixed in the current master. ``` pprint(sb) ⎧ 1 for (b = -1 ∧ e ⎪ ⎪ 0 fo ⎪ ⎪ -1 for b ⎪ ⎨ 2 for im(b) = 0 ∧ im(e) = 0 ∧ ((b > 1 ∧ e > 0) ∨ (e mod 2 = 0 ∧ e ≥ 2 ⎪ ⎪1/2 for im(b) = 0 ∧ im(e) = 0 ∧ ((b > 1 ∧ e < 0) ∨ (e mod 2 = 0 ∧ e ≤ - ⎪ ⎪-1/2 for im(b) = 0 ∧ im(e) = 0 ∧ e mod 2 = 1 ∧ (e ≥ 1 ∨ e ≤ -1) ∧ (e ≥ 1 ∨ b ⎪ ⎩ -2 for im(b) = 0 ∧ im(e) = 0 ∧ e mod 2 = 1 ∧ (e ≥ 1 ∨ e ≤ -1) ∧ (e ≥ 1 ∨ b ``` (and some more lines...) and ``` In [7]: v = Tuple(-2, -1, -0.5, 0, 0.5, 1, 2) ...: for i in v: ...: for j in v: ...: r = {b:i,e:j} ...: ok=[k.xreplace(r) for k in (A,B,sa,sb)] ...: if len(set(ok))!=1:print('ab %s %s'%(r,ok)) ...: In [8]: ```
[ { "body": "I have been trying to write a model for the size of symbolic powers.\r\nI define the Piecewise expression as follows for power `b**e` with\r\nthe expressions of the Piecewise giving a value that is representative\r\nof the region in which the size of the power is found (e.g. 1/2 if the\r\nvalue of the power is between 0 and 1):\r\n\r\n```python\r\nvar('b e')\r\nA=Piecewise((1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))),\r\n(0, Eq(b, 0) & (e > 0)), (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)),\r\n(Piecewise((2, ((b > 1) & (e > 0)) | ((b > 0) & (b < 1) & (e < 0)) |\r\n((e >= 2) & (b < -1) & Eq(Mod(e, 2), 0)) | ((e <= -2) & (b > -1) & (b\r\n< 0) & Eq(Mod(e, 2), 0))), (S.Half, ((b > 1) & (e < 0)) | ((b > 0) & (e >\r\n0) & (b < 1)) | ((e <= -2) & (b < -1) & Eq(Mod(e, 2), 0)) | ((e >= 2)\r\n& (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))), (-S.Half, Eq(Mod(e, 2), 1) &\r\n(((e <= -1) & (b < -1)) | ((e >= 1) & (b > -1) & (b < 0)))), (-2, ((e\r\n>= 1) & (b < -1) & Eq(Mod(e, 2), 1)) | ((e <= -1) & (b > -1) & (b < 0)\r\n& Eq(Mod(e, 2), 1)))), Eq(im(b), 0) & Eq(im(e), 0)))\r\n```\r\nAs a test of folding and simplification I create the following alternative forms for A\r\n```python\r\nB=piecewise_fold(A)\r\nsa=A.simplify()\r\nsb=B.simplify()\r\n```\r\nI then test the forms over a range of `b` and `e` values:\r\n```python\r\nv = Tuple(-2, -1, -0.5, 0, 0.5, 1, 2)\r\nfor i in v:\r\n for j in v:\r\n r = {b:i,e:j}\r\n ok=[k.xreplace(r) for k in (A,B,sa,sb)]\r\n if len(set(ok))!=1:print('ab %s %s'%(r,ok))\r\n```\r\nAlthough `A` and `B` agree, the simplified forms of each do not and I get these results\r\n```python\r\ns {b: -2, e: -1} [-1/2, -1/2, -1/2, nan]\r\ns {b: -2, e: 1} [-2, -2, -2, nan]\r\ns {b: -1/2, e: 1} [-1/2, -1/2, -1/2, nan]\r\n```\r\nThe `nan` indicates that the conditions for the values of `b` and `e` that were passed are not found in the Piecewise expression so a default `nan` is returned. I confirmed that the value returned by `sa` agrees with the value from `A`. I have not yet tracked down the source of the discrepancy from `sb` in which are missing the cases for -2 and -1/2:\r\n```python\r\nPiecewise(\r\n (1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))),\r\n (0, Eq(b, 0) & (e > 0)),\r\n (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)),\r\n (2, Eq(im(b), 0) & Eq(im(e), 0) & ((b > 0) | (b > 1)) & ((b > 0) | (e > 0)) & ((b > 1) | (b < 1)) & ((b > 1) | (e < 0)) & ((e > 0) | (b < 1)) & ((e > 0) | (e < 0))),\r\n (1/2, Eq(im(b), 0) & Eq(im(e), 0) & ((b > 0) | (b > 1)) & ((b > 0) | (e < 0)) & ((b > 1) | (e > 0)) & ((b > 1) | (b < 1)) & ((e > 0) | (e < 0)) & ((b < 1) | (e <0))))\r\n```", "number": 21481, "title": "simplified Piecewise is missing conditions" } ]
9a1de69bf68064a304cc2c0e7e5328647be5ebd0
{ "head_commit": "7d194c72f787c902ace7139a50313bcb0026390e", "head_commit_message": "empty commit", "patch_to_review": "diff --git a/sympy/functions/elementary/tests/test_piecewise.py b/sympy/functions/elementary/tests/test_piecewise.py\nindex 958c671c470f..74618bfa59a9 100644\n--- a/sympy/functions/elementary/tests/test_piecewise.py\n+++ b/sympy/functions/elementary/tests/test_piecewise.py\n@@ -5,6 +5,7 @@\n from sympy.core.expr import unchanged\n from sympy.core.function import (Function, diff, expand)\n from sympy.core.mul import Mul\n+from sympy.core.mod import Mod\n from sympy.core.numbers import (Float, I, Rational, oo, pi, zoo)\n from sympy.core.relational import (Eq, Ge, Gt, Ne)\n from sympy.core.singleton import S\n@@ -1339,6 +1340,31 @@ def test_issue_14787():\n f = Piecewise((x, x < 1), ((S(58) / 7), True))\n assert str(f.evalf()) == \"Piecewise((x, x < 1), (8.28571428571429, True))\"\n \n+def test_issue_21481():\n+ b, e = symbols('b e')\n+ A = Piecewise((1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))),\n+ (0, Eq(b, 0) & (e > 0)), (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)),\n+ (Piecewise((2, ((b > 1) & (e > 0)) | ((b > 0) & (b < 1) & (e < 0)) |\n+ ((e >= 2) & (b < -1) & Eq(Mod(e, 2), 0)) |\n+ ((e <= -2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))),\n+ (S.Half, ((b > 1) & (e < 0)) | ((b > 0) & (e > 0) & (b < 1)) |\n+ ((e <= -2) & (b < -1) & Eq(Mod(e, 2), 0)) |\n+ ((e >= 2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))),\n+ (-S.Half, Eq(Mod(e, 2), 1) & (((e <= -1) & (b < -1)) |\n+ ((e >= 1) & (b > -1) & (b < 0)))),\n+ (-2, ((e >= 1) & (b < -1) & Eq(Mod(e, 2), 1)) |\n+ ((e <= -1) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 1)))),\n+ Eq(im(b), 0) & Eq(im(e), 0)))\n+ B = piecewise_fold(A)\n+ sa = A.simplify()\n+ sb = B.simplify()\n+ v = Tuple(-2, -1, -0.5, 0, 0.5, 1, 2)\n+ for i in v:\n+ for j in v:\n+ r = {b:i, e:j}\n+ ok = [k.xreplace(r) for k in (A, B, sa, sb)]\n+ assert len(set(ok)) == 1\n+\n \n def test_issue_8458():\n x, y = symbols('x y')\n" }
[ { "diff_hunk": "@@ -1339,6 +1340,31 @@ def test_issue_14787():\n f = Piecewise((x, x < 1), ((S(58) / 7), True))\n assert str(f.evalf()) == \"Piecewise((x, x < 1), (8.28571428571429, True))\"\n \n+def test_issue_21481():\n+ b, e = symbols('b e')\n+ A = Piecewise((1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))),\n+ (0, Eq(b, 0) & (e > 0)), (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)),\n+ (Piecewise((2, ((b > 1) & (e > 0)) | ((b > 0) & (b < 1) & (e < 0)) |\n+ ((e >= 2) & (b < -1) & Eq(Mod(e, 2), 0)) |\n+ ((e <= -2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))),\n+ (S.Half, ((b > 1) & (e < 0)) | ((b > 0) & (e > 0) & (b < 1)) |\n+ ((e <= -2) & (b < -1) & Eq(Mod(e, 2), 0)) |\n+ ((e >= 2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))),\n+ (-S.Half, Eq(Mod(e, 2), 1) & (((e <= -1) & (b < -1)) |\n+ ((e >= 1) & (b > -1) & (b < 0)))),\n+ (-2, ((e >= 1) & (b < -1) & Eq(Mod(e, 2), 1)) |\n+ ((e <= -1) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 1)))),\n+ Eq(im(b), 0) & Eq(im(e), 0)))\n+ B = piecewise_fold(A)\n+ sa = A.simplify()\n+ sb = B.simplify()\n+ v = Tuple(-2, -1, -0.5, 0, 0.5, 1, 2)", "line": null, "original_line": 1361, "original_start_line": null, "path": "sympy/functions/elementary/tests/test_piecewise.py", "start_line": null, "text": "@user1:\nWhy do you have to use `Tuple` instead of tuple? And is it intended to use `0.5` instead of `S.Half`?\n\n@author:\nNot a specific idea, but Tuple would be better I thought.\n\n@author:\nshould I replace it ?\n\n@user1:\nIt is better to use python iterables because `Tuple` is quite redundant.\n\n@user1:\nAnd also I suggest to use `S.Half` instead of floats." } ]
a0871f8e479463db063790562dbfd0468e2ac626
diff --git a/sympy/functions/elementary/tests/test_piecewise.py b/sympy/functions/elementary/tests/test_piecewise.py index 958c671c470f..2d4de12b284e 100644 --- a/sympy/functions/elementary/tests/test_piecewise.py +++ b/sympy/functions/elementary/tests/test_piecewise.py @@ -5,6 +5,7 @@ from sympy.core.expr import unchanged from sympy.core.function import (Function, diff, expand) from sympy.core.mul import Mul +from sympy.core.mod import Mod from sympy.core.numbers import (Float, I, Rational, oo, pi, zoo) from sympy.core.relational import (Eq, Ge, Gt, Ne) from sympy.core.singleton import S @@ -1339,6 +1340,43 @@ def test_issue_14787(): f = Piecewise((x, x < 1), ((S(58) / 7), True)) assert str(f.evalf()) == "Piecewise((x, x < 1), (8.28571428571429, True))" +def test_issue_21481(): + b, e = symbols('b e') + C = Piecewise( + (2, + ((b > 1) & (e > 0)) | + ((b > 0) & (b < 1) & (e < 0)) | + ((e >= 2) & (b < -1) & Eq(Mod(e, 2), 0)) | + ((e <= -2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))), + (S.Half, + ((b > 1) & (e < 0)) | + ((b > 0) & (e > 0) & (b < 1)) | + ((e <= -2) & (b < -1) & Eq(Mod(e, 2), 0)) | + ((e >= 2) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 0))), + (-S.Half, + Eq(Mod(e, 2), 1) & + (((e <= -1) & (b < -1)) | ((e >= 1) & (b > -1) & (b < 0)))), + (-2, + ((e >= 1) & (b < -1) & Eq(Mod(e, 2), 1)) | + ((e <= -1) & (b > -1) & (b < 0) & Eq(Mod(e, 2), 1))) + ) + A = Piecewise( + (1, Eq(b, 1) | Eq(e, 0) | (Eq(b, -1) & Eq(Mod(e, 2), 0))), + (0, Eq(b, 0) & (e > 0)), + (-1, Eq(b, -1) & Eq(Mod(e, 2), 1)), + (C, Eq(im(b), 0) & Eq(im(e), 0)) + ) + + B = piecewise_fold(A) + sa = A.simplify() + sb = B.simplify() + v = (-2, -1, -S.Half, 0, S.Half, 1, 2) + for i in v: + for j in v: + r = {b:i, e:j} + ok = [k.xreplace(r) for k in (A, B, sa, sb)] + assert len(set(ok)) == 1 + def test_issue_8458(): x, y = symbols('x y')
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-24586@49edcdc
sympy/sympy
Python
24,586
define dummies once for geometry module
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed fixes #24581 #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * geometry * fixes memory leak caused by unnecessarily creating new Dummies <!-- END RELEASE NOTES -->
2023-01-25T04:48:31Z
memory leak, not a cache issue Hello I have the following code that uses the latest sympy (1.11.1) tested with python 3.8.10 on Windows 10 and with python 3.11.1 on macOS 13.1, the memory is constantly growing. Any ideas why? ``` from sympy import Point, Ray, Circle while True: circle = Circle(Point(0.0, 0.0), 0.5) ray = Ray(Point(0.2, 0.3), Point(0.3, 0.2)) res = circle.intersection(ray) ``` NOTE: the growth in geometry has been addressed, but the scope of the issue is greater than that module so the issue is still open.
I can reproduce this but no idea why it happens. Clearing the cache doesn't help. Calling `gc.collect` doesn't help. With this diff the memory usage stays constant: ```diff diff --git a/sympy/geometry/ellipse.py b/sympy/geometry/ellipse.py index 0c1c5d0..35d92af 100644 --- a/sympy/geometry/ellipse.py +++ b/sympy/geometry/ellipse.py @@ -14,7 +14,7 @@ from sympy.core.logic import fuzzy_bool from sympy.core.numbers import Rational, oo from sympy.core.sorting import ordered -from sympy.core.symbol import Dummy, uniquely_named_symbol, _symbol +from sympy.core.symbol import Symbol, Dummy, uniquely_named_symbol, _symbol from sympy.simplify import simplify, trigsimp from sympy.functions.elementary.miscellaneous import sqrt, Max from sympy.functions.elementary.trigonometric import cos, sin @@ -664,8 +664,8 @@ def intersection(self, o): [Point2D(-17/5, -12/5), Point2D(-17/5, 12/5), Point2D(7/5, -12/5), Point2D(7/5, 12/5)] """ # TODO: Replace solve with nonlinsolve, when nonlinsolve will be able to solve in real domain - x = Dummy('x', real=True) - y = Dummy('y', real=True) + x = Symbol('x', real=True) + y = Symbol('y', real=True) if isinstance(o, Point): if o in self: ``` So what is happening is that every time `intersection` is called two new `Dummy` symbols are created. Somewhere there is some sort of cache that grows without bound if more and more symbols are created. It's here (and a similar one in fields.py): https://github.com/sympy/sympy/blob/f8e33851e174bd686ac8dc88d75f955cf5dc8eeb/sympy/polys/rings.py#L194 This cache grows without bound every time a polynomial ring with new symbols is used. A fix could be to use `lru_cache` instead of a global cache. I am not entirely sure that this is only a cache though because it also associates with a particular PolyElement class and it is possible that that class needs to be globally unique. Can't that be solved by using a WeakDictionary? I've never been a huge fan of this class design in the polys FWIW. I think that a WeakValue dictionary could work but it would probably not achieve the performance benefits intended by the cache in the first place. The point is that this second call is faster: ```python In [1]: %time K = QQ[x,y] CPU times: user 4 ms, sys: 0 ns, total: 4 ms Wall time: 2.38 ms In [2]: %time K = QQ[x,y] CPU times: user 0 ns, sys: 0 ns, total: 0 ns Wall time: 89.6 µs ``` Usually these rings are created transiently as part of an operation like `factor`, `cancel` etc. Since the ring is only a temporary object there will probably not be many in existence at any one time so with a WeakValue dictionary the ring will always disappear. In that case we would be better off just not having any cache. On the other hand of many operations like `factor`, `cancel` etc are used as part of a complex operation it is likely that the same ring will be reconstructed many times so keeping a cache can save time on that. 
I think that probably a LRU cache is good here but given the size of these rings it should probably be kept quite small: ```diff diff --git a/sympy/polys/fields.py b/sympy/polys/fields.py index a3f239c..cdb079a 100644 --- a/sympy/polys/fields.py +++ b/sympy/polys/fields.py @@ -6,6 +6,7 @@ from operator import add, mul, lt, le, gt, ge +from sympy.core.cache import __cacheit as _cacheit from sympy.core.expr import Expr from sympy.core.mod import Mod from sympy.core.numbers import Exp1 @@ -99,11 +100,11 @@ def sfield(exprs, *symbols, **options): else: return (_field, fracs) -_field_cache: dict[Any, Any] = {} class FracField(DefaultPrinting): """Multivariate distributed rational function field. """ + @_cacheit(10) def __new__(cls, symbols, domain, order=lex): from sympy.polys.rings import PolyRing ring = PolyRing(symbols, domain, order) @@ -113,7 +114,7 @@ def __new__(cls, symbols, domain, order=lex): order = ring.order _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _field_cache.get(_hash_tuple) + obj = None if obj is None: obj = object.__new__(cls) @@ -138,8 +139,6 @@ def __new__(cls, symbols, domain, order=lex): if not hasattr(obj, name): setattr(obj, name, generator) - _field_cache[_hash_tuple] = obj - return obj def _gens(self): diff --git a/sympy/polys/rings.py b/sympy/polys/rings.py index 0db1897..3a56486 100644 --- a/sympy/polys/rings.py +++ b/sympy/polys/rings.py @@ -7,6 +7,7 @@ from functools import reduce from types import GeneratorType +from sympy.core.cache import __cacheit as _cacheit from sympy.core.expr import Expr from sympy.core.numbers import igcd, oo from sympy.core.symbol import Symbol, symbols as _symbols @@ -191,11 +192,11 @@ def _parse_symbols(symbols): raise GeneratorsError("expected a string, Symbol or expression or a non-empty sequence of strings, Symbols or expressions") -_ring_cache: dict[Any, Any] = {} class PolyRing(DefaultPrinting, IPolys): """Multivariate distributed polynomial ring. """ + @_cacheit(10) def __new__(cls, symbols, domain, order=lex): symbols = tuple(_parse_symbols(symbols)) ngens = len(symbols) @@ -203,7 +204,7 @@ def __new__(cls, symbols, domain, order=lex): order = OrderOpt.preprocess(order) _hash_tuple = (cls.__name__, symbols, ngens, domain, order) - obj = _ring_cache.get(_hash_tuple) + obj = None if obj is None: if domain.is_Composite and set(symbols) & set(domain.symbols): @@ -257,8 +258,6 @@ def __new__(cls, symbols, domain, order=lex): if not hasattr(obj, name): setattr(obj, name, generator) - _ring_cache[_hash_tuple] = obj - return obj def _gens(self): ``` In the long run the best solution would be to make the ring objects lighter weight and faster to construct. > With this diff the memory usage stays constant: Those two Dummy symbols should be created at the top of ellipse.py and then used the 4 times when needed in the various classes -- there is no need to keep creating new Dummy symbols. To be clear plenty of other operations can cause this cache to grow and even to grow more quickly: ``` while True: cancel(Dummy('x') + 1) ``` I thought the point was that these objects break if they aren't singletonized? I seem to remember that sort of thing being the case for these classes in the polys, but correct me if I am wrong. If that's the case, you'd need a version of lru_cache that also acts like a weak dictionary (i.e., least recently used items aren't removed from the cache if they are also referenced somewhere). 
> Those two Dummy symbols should be created at the top of ellipse.py and then used the 4 times when needed in the various classes -- there is no need to keep creating new Dummy symbols. Wouldn't that change the semantics? A dummy symbol is supposed to be unequal to everything except for itself. Two separate intersections should have dummies that are unequal, but reusing the same Dummy would make them equal. > I thought the point was that these objects break if they aren't singletonized? Yes, actually they do. I just tried removing the cache and a bunch of poly tests failed. The problem is e.g. this: https://github.com/sympy/sympy/blob/f8e33851e174bd686ac8dc88d75f955cf5dc8eeb/sympy/polys/rings.py#L419 Each ring dynamically creates a class for its elements and distinct copies of the same ring would have different classes. I've opened #24585 which for now removes the ring cache altogether so we can see if there's a noticeable impact on performance and if the need for the ring cache is gone. Longer term I would like to remove the dynamic class generation altogether. Also we should just make the construction of rings faster.
[ { "body": "Hello\r\n\r\nI have the following code that uses the latest sympy (1.11.1) tested with python 3.8.10 on Windows 10 and with python 3.11.1 on macOS 13.1, the memory is constantly growing. Any ideas why?\r\n\r\n```\r\nfrom sympy import Point, Ray, Circle\r\nwhile True:\r\n circle = Circle(Point(0.0, 0.0), 0.5)\r\n ray = Ray(Point(0.2, 0.3), Point(0.3, 0.2))\r\n res = circle.intersection(ray)\r\n```\r\n\r\nNOTE: the growth in geometry has been addressed, but the scope of the issue is greater than that module so the issue is still open.", "number": 24581, "title": "memory leak, not a cache issue" } ]
fcf3dd948f2db2599a4a2cba8773c3e1f44f12ef
{ "head_commit": "49edcdc06bfa7b2e4e864e07a96dfcc19e1eee81", "head_commit_message": "define dummies once for module", "patch_to_review": "diff --git a/sympy/geometry/ellipse.py b/sympy/geometry/ellipse.py\nindex 0c1c5d06c4cb..bc38a5427e2f 100644\n--- a/sympy/geometry/ellipse.py\n+++ b/sympy/geometry/ellipse.py\n@@ -34,6 +34,8 @@\n \n import random\n \n+x = Dummy('x', real=True)\n+y = Dummy('y', real=True)\n \n class Ellipse(GeometrySet):\n \"\"\"An elliptical GeometryEntity.\n@@ -103,9 +105,6 @@ class Ellipse(GeometrySet):\n \n def __contains__(self, o):\n if isinstance(o, Point):\n- x = Dummy('x', real=True)\n- y = Dummy('y', real=True)\n-\n res = self.equation(x, y).subs({x: o.x, y: o.y})\n return trigsimp(simplify(res)) is S.Zero\n elif isinstance(o, Ellipse):\n@@ -664,8 +663,6 @@ def intersection(self, o):\n [Point2D(-17/5, -12/5), Point2D(-17/5, 12/5), Point2D(7/5, -12/5), Point2D(7/5, 12/5)]\n \"\"\"\n # TODO: Replace solve with nonlinsolve, when nonlinsolve will be able to solve in real domain\n- x = Dummy('x', real=True)\n- y = Dummy('y', real=True)\n \n if isinstance(o, Point):\n if o in self:\n@@ -925,7 +922,6 @@ def normal_lines(self, p, prec=None):\n \n # find the 4 normal points and construct lines through them with\n # the corresponding slope\n- x, y = Dummy('x', real=True), Dummy('y', real=True)\n eq = self.equation(x, y)\n dydx = idiff(eq, y, x)\n norm = -1/dydx\n@@ -1299,7 +1295,6 @@ def tangent_lines(self, p):\n # else p is outside the ellipse or we can't tell. In case of the\n # latter, the solutions returned will only be valid if\n # the point is not inside the ellipse; if it is, nan will result.\n- x, y = Dummy('x'), Dummy('y')\n eq = self.equation(x, y)\n dydx = idiff(eq, y, x)\n slope = Line(p, Point(x, y)).slope\ndiff --git a/sympy/geometry/entity.py b/sympy/geometry/entity.py\nindex 0a016bd22c40..f3d002537d40 100644\n--- a/sympy/geometry/entity.py\n+++ b/sympy/geometry/entity.py\n@@ -64,6 +64,9 @@\n ]\n \n \n+x, y = Dummy(), Dummy()\n+T = Dummy('t', real=True)\n+\n class GeometryEntity(Basic, EvalfMixin):\n \"\"\"The base class for all geometrical entities.\n \n@@ -392,15 +395,15 @@ def reflect(self, line):\n l = line\n o = Point(0, 0)\n if l.slope.is_zero:\n- y = l.args[0].y\n- if not y: # x-axis\n+ v = l.args[0].y\n+ if not v: # x-axis\n return g.scale(y=-1)\n- reps = [(p, p.translate(y=2*(y - p.y))) for p in g.atoms(Point)]\n+ reps = [(p, p.translate(y=2*(v - p.y))) for p in g.atoms(Point)]\n elif l.slope is oo:\n- x = l.args[0].x\n- if not x: # y-axis\n+ v = l.args[0].x\n+ if not v: # y-axis\n return g.scale(x=-1)\n- reps = [(p, p.translate(x=2*(x - p.x))) for p in g.atoms(Point)]\n+ reps = [(p, p.translate(x=2*(v - p.x))) for p in g.atoms(Point)]\n else:\n if not hasattr(g, 'reflect') and not all(\n isinstance(arg, Point) for arg in g.args):\n@@ -410,7 +413,6 @@ def reflect(self, line):\n c = l.coefficients\n d = -c[-1]/c[1] # y-intercept\n # apply the transform to a single point\n- x, y = Dummy(), Dummy()\n xf = Point(x, y)\n xf = xf.translate(y=-d).rotate(-a, o).scale(y=-1\n ).rotate(a, o).translate(y=d)\n@@ -528,7 +530,6 @@ def parameter_value(self, other, t):\n other = Point(other, dim=self.ambient_dimension)\n if not isinstance(other, Point):\n raise ValueError(\"other must be a point\")\n- T = Dummy('t', real=True)\n sol = solve(self.arbitrary_point(T) - other, T, dict=True)\n if not sol:\n raise ValueError(\"Given point is not on %s\" % func_name(self))\ndiff --git a/sympy/geometry/line.py b/sympy/geometry/line.py\nindex b7c023ab1ad4..5a6911c3e272 
100644\n--- a/sympy/geometry/line.py\n+++ b/sympy/geometry/line.py\n@@ -45,6 +45,9 @@\n import random\n \n \n+t, u = [Dummy(i) for i in 'tu']\n+\n+\n class LinearEntity(GeometrySet):\n \"\"\"A base class for all linear entities (Line, Ray and Segment)\n in n-dimensional Euclidean space.\n@@ -547,7 +550,6 @@ def intersect_parallel_segments(seg1, seg2):\n # arbitrary points, when equal, both give a\n # non-negative parameter when the arbitrary point\n # coordinates are equated\n- t, u = [Dummy(i) for i in 'tu']\n tu = solve(self.arbitrary_point(t) - other.arbitrary_point(u),\n t, u, dict=True)[0]\n def ok(p, l):\n@@ -1038,7 +1040,6 @@ def random_point(self, seed=None):\n rng = random.Random(seed)\n else:\n rng = random\n- t = Dummy()\n pt = self.arbitrary_point(t)\n if isinstance(self, Ray):\n v = abs(rng.gauss(0, 1))\ndiff --git a/sympy/geometry/plane.py b/sympy/geometry/plane.py\nindex 8c677bdc337d..f6bbdc8492ad 100644\n--- a/sympy/geometry/plane.py\n+++ b/sympy/geometry/plane.py\n@@ -24,6 +24,9 @@\n import random\n \n \n+x, y, z = map(Dummy, 'xyz')\n+t = Dummy() # intentionally name left blank\n+\n class Plane(GeometryEntity):\n \"\"\"\n A plane is a flat, two-dimensional surface. A plane is the two-dimensional\n@@ -74,10 +77,8 @@ def __new__(cls, p1, a=None, b=None, **kwargs):\n return GeometryEntity.__new__(cls, p1, normal_vector, **kwargs)\n \n def __contains__(self, o):\n- x, y, z = map(Dummy, 'xyz')\n k = self.equation(x, y, z)\n if isinstance(o, (LinearEntity, LinearEntity3D)):\n- t = Dummy()\n d = Point3D(o.arbitrary_point(t))\n e = k.subs([(x, d.x), (y, d.y), (z, d.z)])\n return e.equals(0)\n@@ -404,7 +405,6 @@ def intersection(self, o):\n if o in self:\n return [o]\n else:\n- t = Dummy() # unnamed else it may clash with a symbol in o\n a = Point3D(o.arbitrary_point(t))\n p1, n = self.p1, Point3D(self.normal_vector)\n \n@@ -454,7 +454,6 @@ def is_coplanar(self, o):\n True\n \"\"\"\n if isinstance(o, Plane):\n- x, y, z = map(Dummy, 'xyz')\n return not cancel(self.equation(x, y, z)/o.equation(x, y, z)).has(x, y, z)\n if isinstance(o, Point3D):\n return o in self\n@@ -818,11 +817,10 @@ def random_point(self, seed=None):\n rng = random.Random(seed)\n else:\n rng = random\n- u, v = Dummy('u'), Dummy('v')\n params = {\n- u: 2*Rational(rng.gauss(0, 1)) - 1,\n- v: 2*Rational(rng.gauss(0, 1)) - 1}\n- return self.arbitrary_point(u, v).subs(params)\n+ x: 2*Rational(rng.gauss(0, 1)) - 1,\n+ y: 2*Rational(rng.gauss(0, 1)) - 1}\n+ return self.arbitrary_point(x, y).subs(params)\n \n def parameter_value(self, other, u, v=None):\n \"\"\"Return the parameter(s) corresponding to the given point.\ndiff --git a/sympy/geometry/polygon.py b/sympy/geometry/polygon.py\nindex 003568e9689b..4badbafb1f4b 100644\n--- a/sympy/geometry/polygon.py\n+++ b/sympy/geometry/polygon.py\n@@ -22,6 +22,9 @@\n import warnings\n \n \n+x, y, T = symbols('x, y, T', real=True, cls=Dummy)\n+\n+\n class Polygon(GeometrySet):\n \"\"\"A two-dimensional polygon.\n \n@@ -848,7 +851,6 @@ def parameter_value(self, other, t):\n if other.free_symbols:\n raise NotImplementedError('non-numeric coordinates')\n unknown = False\n- T = Dummy('t', real=True)\n p = self.arbitrary_point(T)\n for pt, cond in p.args:\n sol = solve(pt - other, T, dict=True)\n@@ -1004,7 +1006,6 @@ def cut_section(self, line):\n points = list(self.vertices)\n points.append(points[0])\n \n- x, y = symbols('x, y', real=True, cls=Dummy)\n eq = line.equation(x, y)\n \n # considering equation of line to be `ax +by + c`\n" }
[ { "diff_hunk": "@@ -34,6 +34,8 @@\n \n import random\n \n+x = Dummy('x', real=True)\n+y = Dummy('y', real=True)", "line": null, "original_line": 38, "original_start_line": 37, "path": "sympy/geometry/ellipse.py", "start_line": null, "text": "@user1:\nWould it be better to use more anonymous dummies `Dummy()` for anywhere where users seeing dummies should be a bug?\r\nI think that it confuses, if such bug exist, at least in jupyter.\n\n@author:\nWhat about the last commit where a generic name is used. I have found debugging to be a little difficult at times when trying to track down a generic \"dummy_123\" or \"x\". Now if the symbol appears it will be obvious where it was generated.\n\n@user2:\nIf a Dummy is returned to a user, it should use a mathematically meaningful name (typically a single letter name like x or t). I agree it's useful to name internal dummies for debugging. \n\n@author:\nNone of these should be returned.\n\n@user2:\nI guess I misunderstood. I thought these were dummies that are returned to the user as a new parameter in the solution. If they are only used internally, then I agree that they can be reused. For dummies like that, having a long (but also unique) name like `Dummy('geometry_internal')` would be best, so that it's very obvious where it came from if it ever does leak out." } ]
e35b5cfdf3c828265454a8cd9d33466f15ea2601
diff --git a/sympy/geometry/ellipse.py b/sympy/geometry/ellipse.py index 0c1c5d06c4cb..96dc2dc1171c 100644 --- a/sympy/geometry/ellipse.py +++ b/sympy/geometry/ellipse.py @@ -34,6 +34,8 @@ import random +x, y = [Dummy('ellipse_dummy', real=True) for i in range(2)] + class Ellipse(GeometrySet): """An elliptical GeometryEntity. @@ -103,9 +105,6 @@ class Ellipse(GeometrySet): def __contains__(self, o): if isinstance(o, Point): - x = Dummy('x', real=True) - y = Dummy('y', real=True) - res = self.equation(x, y).subs({x: o.x, y: o.y}) return trigsimp(simplify(res)) is S.Zero elif isinstance(o, Ellipse): @@ -664,8 +663,6 @@ def intersection(self, o): [Point2D(-17/5, -12/5), Point2D(-17/5, 12/5), Point2D(7/5, -12/5), Point2D(7/5, 12/5)] """ # TODO: Replace solve with nonlinsolve, when nonlinsolve will be able to solve in real domain - x = Dummy('x', real=True) - y = Dummy('y', real=True) if isinstance(o, Point): if o in self: @@ -925,7 +922,6 @@ def normal_lines(self, p, prec=None): # find the 4 normal points and construct lines through them with # the corresponding slope - x, y = Dummy('x', real=True), Dummy('y', real=True) eq = self.equation(x, y) dydx = idiff(eq, y, x) norm = -1/dydx @@ -1299,7 +1295,6 @@ def tangent_lines(self, p): # else p is outside the ellipse or we can't tell. In case of the # latter, the solutions returned will only be valid if # the point is not inside the ellipse; if it is, nan will result. - x, y = Dummy('x'), Dummy('y') eq = self.equation(x, y) dydx = idiff(eq, y, x) slope = Line(p, Point(x, y)).slope diff --git a/sympy/geometry/entity.py b/sympy/geometry/entity.py index 0a016bd22c40..5ea1e807542c 100644 --- a/sympy/geometry/entity.py +++ b/sympy/geometry/entity.py @@ -64,6 +64,10 @@ ] +x, y = [Dummy('entity_dummy') for i in range(2)] +T = Dummy('entity_dummy', real=True) + + class GeometryEntity(Basic, EvalfMixin): """The base class for all geometrical entities. 
@@ -392,15 +396,15 @@ def reflect(self, line): l = line o = Point(0, 0) if l.slope.is_zero: - y = l.args[0].y - if not y: # x-axis + v = l.args[0].y + if not v: # x-axis return g.scale(y=-1) - reps = [(p, p.translate(y=2*(y - p.y))) for p in g.atoms(Point)] + reps = [(p, p.translate(y=2*(v - p.y))) for p in g.atoms(Point)] elif l.slope is oo: - x = l.args[0].x - if not x: # y-axis + v = l.args[0].x + if not v: # y-axis return g.scale(x=-1) - reps = [(p, p.translate(x=2*(x - p.x))) for p in g.atoms(Point)] + reps = [(p, p.translate(x=2*(v - p.x))) for p in g.atoms(Point)] else: if not hasattr(g, 'reflect') and not all( isinstance(arg, Point) for arg in g.args): @@ -410,7 +414,6 @@ def reflect(self, line): c = l.coefficients d = -c[-1]/c[1] # y-intercept # apply the transform to a single point - x, y = Dummy(), Dummy() xf = Point(x, y) xf = xf.translate(y=-d).rotate(-a, o).scale(y=-1 ).rotate(a, o).translate(y=d) @@ -528,7 +531,6 @@ def parameter_value(self, other, t): other = Point(other, dim=self.ambient_dimension) if not isinstance(other, Point): raise ValueError("other must be a point") - T = Dummy('t', real=True) sol = solve(self.arbitrary_point(T) - other, T, dict=True) if not sol: raise ValueError("Given point is not on %s" % func_name(self)) diff --git a/sympy/geometry/line.py b/sympy/geometry/line.py index b7c023ab1ad4..4e6fdb39736d 100644 --- a/sympy/geometry/line.py +++ b/sympy/geometry/line.py @@ -45,6 +45,9 @@ import random +t, u = [Dummy('line_dummy') for i in range(2)] + + class LinearEntity(GeometrySet): """A base class for all linear entities (Line, Ray and Segment) in n-dimensional Euclidean space. @@ -547,7 +550,6 @@ def intersect_parallel_segments(seg1, seg2): # arbitrary points, when equal, both give a # non-negative parameter when the arbitrary point # coordinates are equated - t, u = [Dummy(i) for i in 'tu'] tu = solve(self.arbitrary_point(t) - other.arbitrary_point(u), t, u, dict=True)[0] def ok(p, l): @@ -1038,7 +1040,6 @@ def random_point(self, seed=None): rng = random.Random(seed) else: rng = random - t = Dummy() pt = self.arbitrary_point(t) if isinstance(self, Ray): v = abs(rng.gauss(0, 1)) diff --git a/sympy/geometry/plane.py b/sympy/geometry/plane.py index 8c677bdc337d..1f825948af23 100644 --- a/sympy/geometry/plane.py +++ b/sympy/geometry/plane.py @@ -24,6 +24,9 @@ import random +x, y, z, t = [Dummy('plane_dummy') for i in range(4)] + + class Plane(GeometryEntity): """ A plane is a flat, two-dimensional surface. 
A plane is the two-dimensional @@ -74,10 +77,8 @@ def __new__(cls, p1, a=None, b=None, **kwargs): return GeometryEntity.__new__(cls, p1, normal_vector, **kwargs) def __contains__(self, o): - x, y, z = map(Dummy, 'xyz') k = self.equation(x, y, z) if isinstance(o, (LinearEntity, LinearEntity3D)): - t = Dummy() d = Point3D(o.arbitrary_point(t)) e = k.subs([(x, d.x), (y, d.y), (z, d.z)]) return e.equals(0) @@ -404,7 +405,6 @@ def intersection(self, o): if o in self: return [o] else: - t = Dummy() # unnamed else it may clash with a symbol in o a = Point3D(o.arbitrary_point(t)) p1, n = self.p1, Point3D(self.normal_vector) @@ -454,7 +454,6 @@ def is_coplanar(self, o): True """ if isinstance(o, Plane): - x, y, z = map(Dummy, 'xyz') return not cancel(self.equation(x, y, z)/o.equation(x, y, z)).has(x, y, z) if isinstance(o, Point3D): return o in self @@ -818,11 +817,10 @@ def random_point(self, seed=None): rng = random.Random(seed) else: rng = random - u, v = Dummy('u'), Dummy('v') params = { - u: 2*Rational(rng.gauss(0, 1)) - 1, - v: 2*Rational(rng.gauss(0, 1)) - 1} - return self.arbitrary_point(u, v).subs(params) + x: 2*Rational(rng.gauss(0, 1)) - 1, + y: 2*Rational(rng.gauss(0, 1)) - 1} + return self.arbitrary_point(x, y).subs(params) def parameter_value(self, other, u, v=None): """Return the parameter(s) corresponding to the given point. diff --git a/sympy/geometry/polygon.py b/sympy/geometry/polygon.py index 003568e9689b..5c923088910c 100644 --- a/sympy/geometry/polygon.py +++ b/sympy/geometry/polygon.py @@ -1,7 +1,7 @@ from sympy.core import Expr, S, oo, pi, sympify from sympy.core.evalf import N from sympy.core.sorting import default_sort_key, ordered -from sympy.core.symbol import _symbol, Dummy, symbols, Symbol +from sympy.core.symbol import _symbol, Dummy, Symbol from sympy.functions.elementary.complexes import sign from sympy.functions.elementary.piecewise import Piecewise from sympy.functions.elementary.trigonometric import cos, sin, tan @@ -22,6 +22,9 @@ import warnings +x, y, T = [Dummy('polygon_dummy', real=True) for i in range(3)] + + class Polygon(GeometrySet): """A two-dimensional polygon. @@ -848,7 +851,6 @@ def parameter_value(self, other, t): if other.free_symbols: raise NotImplementedError('non-numeric coordinates') unknown = False - T = Dummy('t', real=True) p = self.arbitrary_point(T) for pt, cond in p.args: sol = solve(pt - other, T, dict=True) @@ -1004,7 +1006,6 @@ def cut_section(self, line): points = list(self.vertices) points.append(points[0]) - x, y = symbols('x, y', real=True, cls=Dummy) eq = line.equation(x, y) # considering equation of line to be `ax +by + c`
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
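To make the naming question in the review thread above concrete, here is a minimal sketch of the convention the reviewers converge on: a module-level Dummy with a long, descriptive name (the `geometry_internal` name comes from the reviewer's suggestion; the variable names are illustrative, not taken from the merged patch).

```python
from sympy import Dummy

# A fresh anonymous Dummy gets an auto-generated name, so if it ever leaks
# into user-visible output it prints as something like "_Dummy_38" and is
# hard to trace back to the code that created it.
anon = Dummy()

# A single module-level Dummy with a descriptive name is created once,
# reused by every helper in the module, and is immediately identifiable
# (it prints as "_geometry_internal") if it does leak.
_GEOM_T = Dummy('geometry_internal', real=True)

# Internal use only: the dummy is substituted away before anything is returned.
result = (_GEOM_T**2 + 1).subs(_GEOM_T, 3)
print(result)  # 10
```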
sympy__sympy-24714@0b18e2b
sympy/sympy
Python
24,714
Added test for sring extension=True error
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #18894 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. Formerly, `log(-x)` incorrectly gave `-log(x)`. * physics.units * Corrected a semantical error in the conversion between volt and statvolt which reported the volt as being larger than the statvolt. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2023-02-13T18:36:14Z
sring extension=True error: nan is not in any domain Somehow a nan is generated in the intermediate calculations I guess (same for sfield): ```julia In [1]: items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8] In [2]: items Out[2]: ⎡ ___________ ___________⎤ ⎢3 ╲╱ 3⋅√3 + 10 1 3⋅√3 1 3⋅√3 3 ╲╱ 3⋅√3 + 10 ⎥ ⎢── + ─────────────, ─ + ────, ─ + ────, - ── + ─────────────⎥ ⎣16 8 8 16 8 16 16 8 ⎦ In [3]: sring(items, extension=True) --------------------------------------------------------------------------- CoercionFailed Traceback (most recent call last) <ipython-input-3-549325ae401c> in <module> ----> 1 sring(items, extension=True) ~/current/sympy/sympy/sympy/polys/rings.py in sring(exprs, *symbols, **options) 173 174 _ring = PolyRing(opt.gens, opt.domain, opt.order) --> 175 polys = list(map(_ring.from_dict, reps)) 176 177 if single: ~/current/sympy/sympy/sympy/polys/rings.py in from_dict(self, element) 357 358 for monom, coeff in element.items(): --> 359 coeff = domain_new(coeff) 360 if coeff: 361 poly[monom] = coeff ~/current/sympy/sympy/sympy/polys/rings.py in domain_new(self, element, orig_domain) 316 317 def domain_new(self, element, orig_domain=None): --> 318 return self.domain.convert(element, orig_domain) 319 320 def ground_new(self, coeff): ~/current/sympy/sympy/sympy/polys/domains/domain.py in convert(self, element, base) 145 if isinstance(element, Basic): 146 try: --> 147 return self.from_sympy(element) 148 except (TypeError, ValueError): 149 pass ~/current/sympy/sympy/sympy/polys/domains/algebraicfield.py in from_sympy(self, a) 75 76 try: ---> 77 return self(to_number_field(a, self.ext).native_coeffs()) 78 except (NotAlgebraic, IsomorphismFailed): 79 raise CoercionFailed( ~/current/sympy/sympy/sympy/polys/numberfields.py in to_number_field(extension, theta, **args) 1077 theta = AlgebraicNumber(theta, gen=gen) 1078 -> 1079 coeffs = field_isomorphism(root, theta) 1080 1081 if coeffs is not None: ~/current/sympy/sympy/sympy/polys/numberfields.py in field_isomorphism(a, b, **args) 1043 if args.get('fast', True): 1044 try: -> 1045 result = field_isomorphism_pslq(a, b) 1046 1047 if result is not None: ~/current/sympy/sympy/sympy/polys/numberfields.py in field_isomorphism_pslq(a, b) 975 976 coeffs = list(reversed(coeffs)) --> 977 h = Poly(coeffs, f.gen, domain='QQ') 978 979 if f.compose(h).rem(g).is_zero: ~/current/sympy/sympy/sympy/polys/polytools.py in __new__(cls, rep, *gens, **args) 150 return cls._from_dict(rep, opt) 151 else: --> 152 return cls._from_list(list(rep), opt) 153 else: 154 rep = sympify(rep) ~/current/sympy/sympy/sympy/polys/polytools.py in _from_list(cls, rep, opt) 245 domain, rep = construct_domain(rep, opt=opt) 246 else: --> 247 rep = list(map(domain.convert, rep)) 248 249 return cls.new(DMP.from_list(rep, level, domain), *gens) ~/current/sympy/sympy/sympy/polys/domains/domain.py in convert(self, element, base) 106 """Convert ``element`` to ``self.dtype``. """ 107 if _not_a_coeff(element): --> 108 raise CoercionFailed('%s is not in any domain' % element) 109 110 if base is not None: CoercionFailed: nan is not in any domain ```
This seems to be working now. Looks like it was fixed in 97129c5f70b32ca39875caccf3d5bb574c497546 from https://github.com/sympy/sympy/pull/20274 @oscarbenjamin seems it working fine on the master, should it require a test to close? ``` python >>> items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8] >>> items [3/16 + sqrt(3*sqrt(3) + 10)/8, 1/8 + 3*sqrt(3)/16, 1/8 + 3*sqrt(3)/16, -3/16 + sqrt(3*sqrt(3) + 10)/8] >>> sring(items, extension=True) (Polynomial ring in over QQ<sqrt(3) + sqrt(3*sqrt(3) + 10)> with lex order, [-ANP([MPQ(-1,76), MPQ(3,152), MPQ(3,8), MPQ(87,304)], [MPQ(1,1), MPQ(0,1), MPQ(-26,1), MPQ(-36,1), MPQ(22,1)], QQ), ANP([MPQ(3,152), MPQ(-9,304), MPQ(-3,8), MPQ(-7,304)], [MPQ(1,1), MPQ(0,1), MPQ(-26,1), MPQ(-36,1), MPQ(22,1)], QQ), ANP([MPQ(3,152), MPQ(-9,304), MPQ(-3,8), MPQ(-7,304)], [MPQ(1,1), MPQ(0,1), MPQ(-26,1), MPQ(-36,1), MPQ(22,1)], QQ), -ANP([MPQ(-1,76), MPQ(3,152), MPQ(3,8), MPQ(-27,304)], [MPQ(1,1), MPQ(0,1), MPQ(-26,1), MPQ(-36,1), MPQ(22,1)], QQ)]) ``` Yes, there should be a test.
[ { "body": "Somehow a nan is generated in the intermediate calculations I guess (same for sfield):\r\n```julia\r\nIn [1]: items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8] \r\n\r\nIn [2]: items \r\nOut[2]: \r\n⎡ ___________ ___________⎤\r\n⎢3 ╲╱ 3⋅√3 + 10 1 3⋅√3 1 3⋅√3 3 ╲╱ 3⋅√3 + 10 ⎥\r\n⎢── + ─────────────, ─ + ────, ─ + ────, - ── + ─────────────⎥\r\n⎣16 8 8 16 8 16 16 8 ⎦\r\n\r\nIn [3]: sring(items, extension=True) \r\n---------------------------------------------------------------------------\r\nCoercionFailed Traceback (most recent call last)\r\n<ipython-input-3-549325ae401c> in <module>\r\n----> 1 sring(items, extension=True)\r\n\r\n~/current/sympy/sympy/sympy/polys/rings.py in sring(exprs, *symbols, **options)\r\n 173 \r\n 174 _ring = PolyRing(opt.gens, opt.domain, opt.order)\r\n--> 175 polys = list(map(_ring.from_dict, reps))\r\n 176 \r\n 177 if single:\r\n\r\n~/current/sympy/sympy/sympy/polys/rings.py in from_dict(self, element)\r\n 357 \r\n 358 for monom, coeff in element.items():\r\n--> 359 coeff = domain_new(coeff)\r\n 360 if coeff:\r\n 361 poly[monom] = coeff\r\n\r\n~/current/sympy/sympy/sympy/polys/rings.py in domain_new(self, element, orig_domain)\r\n 316 \r\n 317 def domain_new(self, element, orig_domain=None):\r\n--> 318 return self.domain.convert(element, orig_domain)\r\n 319 \r\n 320 def ground_new(self, coeff):\r\n\r\n~/current/sympy/sympy/sympy/polys/domains/domain.py in convert(self, element, base)\r\n 145 if isinstance(element, Basic):\r\n 146 try:\r\n--> 147 return self.from_sympy(element)\r\n 148 except (TypeError, ValueError):\r\n 149 pass\r\n\r\n~/current/sympy/sympy/sympy/polys/domains/algebraicfield.py in from_sympy(self, a)\r\n 75 \r\n 76 try:\r\n---> 77 return self(to_number_field(a, self.ext).native_coeffs())\r\n 78 except (NotAlgebraic, IsomorphismFailed):\r\n 79 raise CoercionFailed(\r\n\r\n~/current/sympy/sympy/sympy/polys/numberfields.py in to_number_field(extension, theta, **args)\r\n 1077 theta = AlgebraicNumber(theta, gen=gen)\r\n 1078 \r\n-> 1079 coeffs = field_isomorphism(root, theta)\r\n 1080 \r\n 1081 if coeffs is not None:\r\n\r\n~/current/sympy/sympy/sympy/polys/numberfields.py in field_isomorphism(a, b, **args)\r\n 1043 if args.get('fast', True):\r\n 1044 try:\r\n-> 1045 result = field_isomorphism_pslq(a, b)\r\n 1046 \r\n 1047 if result is not None:\r\n\r\n~/current/sympy/sympy/sympy/polys/numberfields.py in field_isomorphism_pslq(a, b)\r\n 975 \r\n 976 coeffs = list(reversed(coeffs))\r\n--> 977 h = Poly(coeffs, f.gen, domain='QQ')\r\n 978 \r\n 979 if f.compose(h).rem(g).is_zero:\r\n\r\n~/current/sympy/sympy/sympy/polys/polytools.py in __new__(cls, rep, *gens, **args)\r\n 150 return cls._from_dict(rep, opt)\r\n 151 else:\r\n--> 152 return cls._from_list(list(rep), opt)\r\n 153 else:\r\n 154 rep = sympify(rep)\r\n\r\n~/current/sympy/sympy/sympy/polys/polytools.py in _from_list(cls, rep, opt)\r\n 245 domain, rep = construct_domain(rep, opt=opt)\r\n 246 else:\r\n--> 247 rep = list(map(domain.convert, rep))\r\n 248 \r\n 249 return cls.new(DMP.from_list(rep, level, domain), *gens)\r\n\r\n~/current/sympy/sympy/sympy/polys/domains/domain.py in convert(self, element, base)\r\n 106 \"\"\"Convert ``element`` to ``self.dtype``. 
\"\"\"\r\n 107 if _not_a_coeff(element):\r\n--> 108 raise CoercionFailed('%s is not in any domain' % element)\r\n 109 \r\n 110 if base is not None:\r\n\r\nCoercionFailed: nan is not in any domain\r\n```", "number": 18894, "title": "sring extension=True error: nan is not in any domain" } ]
8cc0567ed69bff99f7a12a17846f3809f4151af1
{ "head_commit": "0b18e2be56d9fc7e8f03fa3a6332c955f9cad1e9", "head_commit_message": "Added test for sring extension=True error", "patch_to_review": "diff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py\nindex 35d2b3617230..4bd47b0cd00f 100644\n--- a/sympy/polys/tests/test_rings.py\n+++ b/sympy/polys/tests/test_rings.py\n@@ -9,10 +9,12 @@\n from sympy.polys.orderings import lex, grlex\n from sympy.polys.polyerrors import GeneratorsError, \\\n ExactQuotientFailed, MultivariatePolynomialError, CoercionFailed\n+from sympy.external.gmpy import MPQ\n+from sympy.polys.polyclasses import ANP\n \n from sympy.testing.pytest import raises\n from sympy.core import Symbol, symbols\n-\n+from sympy.core.singleton import S\n from sympy.core.numbers import (oo, pi)\n from sympy.functions.elementary.exponential import exp\n from sympy.functions.elementary.miscellaneous import sqrt\n@@ -1455,6 +1457,16 @@ def test_PolyElement_sqf_list():\n assert f.sqf_part() == p\n assert f.sqf_list() == (1, [(g, 1), (h, 2)])\n \n+def test_issue_18894():\n+ items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8]\n+ R,a = sring(items, extension=True)\n+ assert R.domain == QQ.algebraic_field(sqrt(3)+sqrt(3*sqrt(3)+10))\n+ assert R.gens == ()\n+ result = []\n+ for item in items:\n+ result.append(R.domain.from_sympy(item))\n+ assert a == result\n+\n def test_PolyElement_factor_list():\n _, x = ring(\"x\", ZZ)\n \n" }
[ { "diff_hunk": "@@ -1455,6 +1457,16 @@ def test_PolyElement_sqf_list():\n assert f.sqf_part() == p\n assert f.sqf_list() == (1, [(g, 1), (h, 2)])\n \n+def test_issue_18894():\n+ items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8]\n+ R,a = sring(items, extension=True)", "line": null, "original_line": 1462, "original_start_line": null, "path": "sympy/polys/tests/test_rings.py", "start_line": null, "text": "@user1:\nPlease use `R, a = sring(...`" } ]
cc3726119320c73d0f964bc07c7fe68116cf9449
diff --git a/sympy/polys/tests/test_rings.py b/sympy/polys/tests/test_rings.py index 35d2b3617230..a753bdd809c9 100644 --- a/sympy/polys/tests/test_rings.py +++ b/sympy/polys/tests/test_rings.py @@ -12,7 +12,7 @@ from sympy.testing.pytest import raises from sympy.core import Symbol, symbols - +from sympy.core.singleton import S from sympy.core.numbers import (oo, pi) from sympy.functions.elementary.exponential import exp from sympy.functions.elementary.miscellaneous import sqrt @@ -1455,6 +1455,16 @@ def test_PolyElement_sqf_list(): assert f.sqf_part() == p assert f.sqf_list() == (1, [(g, 1), (h, 2)]) +def test_issue_18894(): + items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8, S(1)/8 + 3*sqrt(3)/16, S(1)/8 + 3*sqrt(3)/16, -S(3)/16 + sqrt(3*sqrt(3) + 10)/8] + R, a = sring(items, extension=True) + assert R.domain == QQ.algebraic_field(sqrt(3)+sqrt(3*sqrt(3)+10)) + assert R.gens == () + result = [] + for item in items: + result.append(R.domain.from_sympy(item)) + assert a == result + def test_PolyElement_factor_list(): _, x = ring("x", ZZ)
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
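As a quick illustration of what the regression test added in this record checks, here is a minimal sketch that mirrors the merged test (it assumes a SymPy version that already contains the underlying fix, so the call no longer raises `CoercionFailed`):

```python
from sympy import S, sqrt
from sympy.polys.domains import QQ
from sympy.polys.rings import sring

items = [S(3)/16 + sqrt(3*sqrt(3) + 10)/8,
         S(1)/8 + 3*sqrt(3)/16,
         S(1)/8 + 3*sqrt(3)/16,
         -S(3)/16 + sqrt(3*sqrt(3) + 10)/8]

# The ground domain is an algebraic extension of QQ and there are no generators.
R, coeffs = sring(items, extension=True)
print(R.domain == QQ.algebraic_field(sqrt(3) + sqrt(3*sqrt(3) + 10)))   # True
print(R.gens)                                                           # ()
print(coeffs == [R.domain.from_sympy(item) for item in items])          # True
```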
sympy__sympy-24463@2c5ce40
sympy/sympy
Python
24,463
Adding electron_rest_mass unit in physics.units
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24462 #### Brief description of what is fixed or changed The electron rest mass (symbol: me) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.units * Added electron rest mass unit <!-- END RELEASE NOTES -->
2023-01-02T20:16:57Z
Missing electron rest mass unit in physics.units The electron rest mass (symbol: me) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about 9.109×10⁻³¹ kilograms. It is not in `physics.units`; should we add it?
[ { "body": "The electron rest mass (symbol: me) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about 9.109×10−31 kilograms\r\nIt is not in `physics.units` should we add it.", "number": 24462, "title": "Missing electron rest mass unit in physics.units" } ]
40b1af74c7f6826b9ba702c359cb538621e9207c
{ "head_commit": "2c5ce400f731c53cc2cac0ccc7929ff285b2aabf", "head_commit_message": "Adding electron_rest_mass unit", "patch_to_review": "diff --git a/sympy/physics/units/__init__.py b/sympy/physics/units/__init__.py\nindex 18fee8c60119..361dbfdbcb01 100644\n--- a/sympy/physics/units/__init__.py\n+++ b/sympy/physics/units/__init__.py\n@@ -164,6 +164,7 @@\n josephson_constant,\n von_klitzing_constant,\n Da, dalton, amu, amus, atomic_mass_unit, atomic_mass_constant,\n+ me, electron_rest_mass,\n gee, gees, acceleration_due_to_gravity,\n u0, magnetic_constant, vacuum_permeability,\n e0, electric_constant, vacuum_permittivity,\n@@ -398,6 +399,7 @@ def find_unit(quantity, unit_system=\"SI\"):\n 'josephson_constant',\n 'von_klitzing_constant',\n 'Da', 'dalton', 'amu', 'amus', 'atomic_mass_unit', 'atomic_mass_constant',\n+ 'me', 'electron_rest_mass',\n 'gee', 'gees', 'acceleration_due_to_gravity',\n 'u0', 'magnetic_constant', 'vacuum_permeability',\n 'e0', 'electric_constant', 'vacuum_permittivity',\ndiff --git a/sympy/physics/units/definitions/__init__.py b/sympy/physics/units/definitions/__init__.py\nindex f60eb838bed3..ab5a0e1c2f4b 100644\n--- a/sympy/physics/units/definitions/__init__.py\n+++ b/sympy/physics/units/definitions/__init__.py\n@@ -82,6 +82,7 @@\n josephson_constant,\n von_klitzing_constant,\n Da, dalton, amu, amus, atomic_mass_unit, atomic_mass_constant,\n+ me, electron_rest_mass,\n gee, gees, acceleration_due_to_gravity,\n u0, magnetic_constant, vacuum_permeability,\n e0, electric_constant, vacuum_permittivity,\n@@ -213,6 +214,7 @@\n 'josephson_constant',\n 'von_klitzing_constant',\n 'Da', 'dalton', 'amu', 'amus', 'atomic_mass_unit', 'atomic_mass_constant',\n+ 'me', 'electron_rest_mass',\n 'gee', 'gees', 'acceleration_due_to_gravity',\n 'u0', 'magnetic_constant', 'vacuum_permeability',\n 'e0', 'electric_constant', 'vacuum_permittivity',\ndiff --git a/sympy/physics/units/definitions/unit_definitions.py b/sympy/physics/units/definitions/unit_definitions.py\nindex 4939fa65acb5..cf51b51f45a5 100644\n--- a/sympy/physics/units/definitions/unit_definitions.py\n+++ b/sympy/physics/units/definitions/unit_definitions.py\n@@ -122,6 +122,9 @@\n t = metric_ton = tonne = Quantity(\"tonne\", abbrev=\"t\")\n tonne.set_global_relative_scale_factor(mega, gram)\n \n+# Electron rest mass\n+me = electron_rest_mass = Quantity(\"electron_rest_mass\", abbrev=\"me\")\n+\n \n # Common length units\n \ndiff --git a/sympy/physics/units/systems/si.py b/sympy/physics/units/systems/si.py\nindex 700495ad9d26..02f3e0a4b09f 100644\n--- a/sympy/physics/units/systems/si.py\n+++ b/sympy/physics/units/systems/si.py\n@@ -25,7 +25,7 @@\n coulomb, volt, ohm, siemens, farad, henry, tesla, weber, dioptre, lux,\n katal, gray, becquerel, inch, liter, julian_year, gravitational_constant,\n speed_of_light, elementary_charge, planck, hbar, electronvolt,\n- avogadro_number, avogadro_constant, boltzmann_constant,\n+ avogadro_number, avogadro_constant, boltzmann_constant, electron_rest_mass,\n stefan_boltzmann_constant, Da, atomic_mass_constant, molar_gas_constant,\n faraday_constant, josephson_constant, von_klitzing_constant,\n acceleration_due_to_gravity, magnetic_constant, vacuum_permittivity,\n@@ -229,6 +229,10 @@\n SI.set_quantity_dimension(vacuum_impedance, impedance)\n SI.set_quantity_scale_factor(vacuum_impedance, u0 * c)\n \n+# Electron rest mass\n+SI.set_quantity_dimension(electron_rest_mass, mass)\n+SI.set_quantity_scale_factor(electron_rest_mass, 9.1093837015e-31*kilogram)\n+\n # Coulomb's constant:\n 
SI.set_quantity_dimension(coulomb_constant, force * length ** 2 / charge ** 2)\n SI.set_quantity_scale_factor(coulomb_constant, 1/(4*pi*vacuum_permittivity))\n@@ -364,7 +368,7 @@\n 'planck_angular_frequency', 'ohm', 'pound', 'planck_pressure', 'G', 'psi',\n 'dHg0', 'von_klitzing_constant', 'planck_length', 'avogadro_number',\n 'mole', 'acceleration', 'information', 'planck_energy_density',\n- 'mebibyte', 's', 'acceleration_due_to_gravity',\n+ 'mebibyte', 's', 'acceleration_due_to_gravity', 'electron_rest_mass',\n 'planck_temperature', 'units', 'mass', 'dimsys_MKSA', 'kelvin', 'kPa',\n 'boltzmann', 'milli_mass_unit', 'planck_impedance', 'electric_constant',\n 'derived_dims', 'kg', 'coulomb', 'siemens', 'byte', 'magnetic_flux',\ndiff --git a/sympy/physics/units/tests/test_quantities.py b/sympy/physics/units/tests/test_quantities.py\nindex 0d6784decf2d..bb0b0afb4c9e 100644\n--- a/sympy/physics/units/tests/test_quantities.py\n+++ b/sympy/physics/units/tests/test_quantities.py\n@@ -18,7 +18,7 @@\n day, foot, grams, hour, inch, kg, km, m, meter, millimeter,\n minute, quart, s, second, speed_of_light, bit,\n byte, kibibyte, mebibyte, gibibyte, tebibyte, pebibyte, exbibyte,\n- kilogram, gravitational_constant)\n+ kilogram, gravitational_constant, electron_rest_mass)\n \n from sympy.physics.units.definitions.dimension_definitions import (\n Dimension, charge, length, time, temperature, pressure,\n@@ -278,6 +278,9 @@ def test_issue_quart():\n assert convert_to(4 * quart / inch ** 3, meter) == 231\n assert convert_to(4 * quart / inch ** 3, millimeter) == 231\n \n+def test_electron_rest_mass():\n+ assert convert_to(electron_rest_mass, kilogram) == 9.1093837015e-31*kilogram\n+ assert convert_to(electron_rest_mass, grams) == 9.1093837015e-28*grams\n \n def test_issue_5565():\n assert (m < s).is_Relational\n@@ -304,11 +307,11 @@ def test_find_unit():\n 'deciliter', 'centiliter', 'deciliters', 'milliliter',\n 'centiliters', 'milliliters', 'planck_volume']\n assert find_unit('voltage') == ['V', 'v', 'volt', 'volts', 'planck_voltage']\n- assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'mg', 'ug', 'amu', 'mmu', 'amus',\n- 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton',\n- 'pounds', 'kilogram', 'kilograms', 'microgram', 'milligram',\n- 'metric_ton', 'micrograms', 'milligrams', 'planck_mass',\n- 'milli_mass_unit', 'atomic_mass_unit', 'atomic_mass_constant']\n+ assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'me', 'mg', 'ug', 'amu', 'mmu', 'amus',\n+ 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton', 'pounds',\n+ 'kilogram', 'kilograms','microgram', 'milligram', 'metric_ton',\n+ 'micrograms', 'milligrams', 'planck_mass', 'milli_mass_unit', 'atomic_mass_unit',\n+ 'electron_rest_mass', 'atomic_mass_constant']\n \n \n def test_Quantity_derivative():\n" }
[ { "diff_hunk": "@@ -304,11 +307,11 @@ def test_find_unit():\n 'deciliter', 'centiliter', 'deciliters', 'milliliter',\n 'centiliters', 'milliliters', 'planck_volume']\n assert find_unit('voltage') == ['V', 'v', 'volt', 'volts', 'planck_voltage']\n- assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'mg', 'ug', 'amu', 'mmu', 'amus',\n- 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton',\n- 'pounds', 'kilogram', 'kilograms', 'microgram', 'milligram',\n- 'metric_ton', 'micrograms', 'milligrams', 'planck_mass',\n- 'milli_mass_unit', 'atomic_mass_unit', 'atomic_mass_constant']\n+ assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'me', 'mg', 'ug', 'amu', 'mmu', 'amus',\n+ 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton', 'pounds',\n+ 'kilogram', 'kilograms','microgram', 'milligram', 'metric_ton',", "line": null, "original_line": 312, "original_start_line": null, "path": "sympy/physics/units/tests/test_quantities.py", "start_line": null, "text": "@user1:\nmisses a space between `kilograms` and `microgram`" } ]
148cb89a741149484f03a4c3ee4ba3141a03c5f3
diff --git a/sympy/physics/units/__init__.py b/sympy/physics/units/__init__.py index 18fee8c60119..361dbfdbcb01 100644 --- a/sympy/physics/units/__init__.py +++ b/sympy/physics/units/__init__.py @@ -164,6 +164,7 @@ josephson_constant, von_klitzing_constant, Da, dalton, amu, amus, atomic_mass_unit, atomic_mass_constant, + me, electron_rest_mass, gee, gees, acceleration_due_to_gravity, u0, magnetic_constant, vacuum_permeability, e0, electric_constant, vacuum_permittivity, @@ -398,6 +399,7 @@ def find_unit(quantity, unit_system="SI"): 'josephson_constant', 'von_klitzing_constant', 'Da', 'dalton', 'amu', 'amus', 'atomic_mass_unit', 'atomic_mass_constant', + 'me', 'electron_rest_mass', 'gee', 'gees', 'acceleration_due_to_gravity', 'u0', 'magnetic_constant', 'vacuum_permeability', 'e0', 'electric_constant', 'vacuum_permittivity', diff --git a/sympy/physics/units/definitions/__init__.py b/sympy/physics/units/definitions/__init__.py index f60eb838bed3..ab5a0e1c2f4b 100644 --- a/sympy/physics/units/definitions/__init__.py +++ b/sympy/physics/units/definitions/__init__.py @@ -82,6 +82,7 @@ josephson_constant, von_klitzing_constant, Da, dalton, amu, amus, atomic_mass_unit, atomic_mass_constant, + me, electron_rest_mass, gee, gees, acceleration_due_to_gravity, u0, magnetic_constant, vacuum_permeability, e0, electric_constant, vacuum_permittivity, @@ -213,6 +214,7 @@ 'josephson_constant', 'von_klitzing_constant', 'Da', 'dalton', 'amu', 'amus', 'atomic_mass_unit', 'atomic_mass_constant', + 'me', 'electron_rest_mass', 'gee', 'gees', 'acceleration_due_to_gravity', 'u0', 'magnetic_constant', 'vacuum_permeability', 'e0', 'electric_constant', 'vacuum_permittivity', diff --git a/sympy/physics/units/definitions/unit_definitions.py b/sympy/physics/units/definitions/unit_definitions.py index 4939fa65acb5..cf51b51f45a5 100644 --- a/sympy/physics/units/definitions/unit_definitions.py +++ b/sympy/physics/units/definitions/unit_definitions.py @@ -122,6 +122,9 @@ t = metric_ton = tonne = Quantity("tonne", abbrev="t") tonne.set_global_relative_scale_factor(mega, gram) +# Electron rest mass +me = electron_rest_mass = Quantity("electron_rest_mass", abbrev="me") + # Common length units diff --git a/sympy/physics/units/systems/si.py b/sympy/physics/units/systems/si.py index 700495ad9d26..02f3e0a4b09f 100644 --- a/sympy/physics/units/systems/si.py +++ b/sympy/physics/units/systems/si.py @@ -25,7 +25,7 @@ coulomb, volt, ohm, siemens, farad, henry, tesla, weber, dioptre, lux, katal, gray, becquerel, inch, liter, julian_year, gravitational_constant, speed_of_light, elementary_charge, planck, hbar, electronvolt, - avogadro_number, avogadro_constant, boltzmann_constant, + avogadro_number, avogadro_constant, boltzmann_constant, electron_rest_mass, stefan_boltzmann_constant, Da, atomic_mass_constant, molar_gas_constant, faraday_constant, josephson_constant, von_klitzing_constant, acceleration_due_to_gravity, magnetic_constant, vacuum_permittivity, @@ -229,6 +229,10 @@ SI.set_quantity_dimension(vacuum_impedance, impedance) SI.set_quantity_scale_factor(vacuum_impedance, u0 * c) +# Electron rest mass +SI.set_quantity_dimension(electron_rest_mass, mass) +SI.set_quantity_scale_factor(electron_rest_mass, 9.1093837015e-31*kilogram) + # Coulomb's constant: SI.set_quantity_dimension(coulomb_constant, force * length ** 2 / charge ** 2) SI.set_quantity_scale_factor(coulomb_constant, 1/(4*pi*vacuum_permittivity)) @@ -364,7 +368,7 @@ 'planck_angular_frequency', 'ohm', 'pound', 'planck_pressure', 'G', 'psi', 'dHg0', 'von_klitzing_constant', 
'planck_length', 'avogadro_number', 'mole', 'acceleration', 'information', 'planck_energy_density', - 'mebibyte', 's', 'acceleration_due_to_gravity', + 'mebibyte', 's', 'acceleration_due_to_gravity', 'electron_rest_mass', 'planck_temperature', 'units', 'mass', 'dimsys_MKSA', 'kelvin', 'kPa', 'boltzmann', 'milli_mass_unit', 'planck_impedance', 'electric_constant', 'derived_dims', 'kg', 'coulomb', 'siemens', 'byte', 'magnetic_flux', diff --git a/sympy/physics/units/tests/test_quantities.py b/sympy/physics/units/tests/test_quantities.py index 0d6784decf2d..962677d4d00d 100644 --- a/sympy/physics/units/tests/test_quantities.py +++ b/sympy/physics/units/tests/test_quantities.py @@ -18,7 +18,7 @@ day, foot, grams, hour, inch, kg, km, m, meter, millimeter, minute, quart, s, second, speed_of_light, bit, byte, kibibyte, mebibyte, gibibyte, tebibyte, pebibyte, exbibyte, - kilogram, gravitational_constant) + kilogram, gravitational_constant, electron_rest_mass) from sympy.physics.units.definitions.dimension_definitions import ( Dimension, charge, length, time, temperature, pressure, @@ -278,6 +278,9 @@ def test_issue_quart(): assert convert_to(4 * quart / inch ** 3, meter) == 231 assert convert_to(4 * quart / inch ** 3, millimeter) == 231 +def test_electron_rest_mass(): + assert convert_to(electron_rest_mass, kilogram) == 9.1093837015e-31*kilogram + assert convert_to(electron_rest_mass, grams) == 9.1093837015e-28*grams def test_issue_5565(): assert (m < s).is_Relational @@ -304,11 +307,11 @@ def test_find_unit(): 'deciliter', 'centiliter', 'deciliters', 'milliliter', 'centiliters', 'milliliters', 'planck_volume'] assert find_unit('voltage') == ['V', 'v', 'volt', 'volts', 'planck_voltage'] - assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'mg', 'ug', 'amu', 'mmu', 'amus', - 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton', - 'pounds', 'kilogram', 'kilograms', 'microgram', 'milligram', - 'metric_ton', 'micrograms', 'milligrams', 'planck_mass', - 'milli_mass_unit', 'atomic_mass_unit', 'atomic_mass_constant'] + assert find_unit(grams) == ['g', 't', 'Da', 'kg', 'me', 'mg', 'ug', 'amu', 'mmu', 'amus', + 'gram', 'mmus', 'grams', 'pound', 'tonne', 'dalton', 'pounds', + 'kilogram', 'kilograms', 'microgram', 'milligram', 'metric_ton', + 'micrograms', 'milligrams', 'planck_mass', 'milli_mass_unit', 'atomic_mass_unit', + 'electron_rest_mass', 'atomic_mass_constant'] def test_Quantity_derivative():
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "New Feature Additions" }
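For context, a small usage sketch of the unit introduced by this record (it assumes a SymPy release that includes this patch; the printed float formatting may differ slightly between versions):

```python
from sympy.physics.units import convert_to, electron_rest_mass, kilogram, gram

# The SI scale factor set in the patch makes both conversions plain floats.
print(convert_to(electron_rest_mass, kilogram))   # ~ 9.1093837015e-31*kilogram
print(convert_to(electron_rest_mass, gram))       # ~ 9.1093837015e-28*gram
```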
sympy__sympy-24370@a33f382
sympy/sympy
Python
24,370
Fix Floor division with sympy.Integer
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> Fixes #24369 #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed Floor division with sympy.Integer #### Other comments `import sympy` `s0 = sympy.Symbol('s0')` `sympy.Integer(1024)//s0` #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2022-12-11T14:10:23Z
Floor division with sympy.Integer gives: Argument of Integer should be of numeric type, got floor(1024/s0) ``` import sympy s0 = sympy.Symbol('s0') sympy.Integer(1024)//s0 ``` gives ``` Traceback (most recent call last): File "/Users/ezyang/Dev/sympy/sympy/core/numbers.py", line 2098, in __new__ ival = int(i) File "/Users/ezyang/Dev/sympy/sympy/core/expr.py", line 320, in __int__ raise TypeError("Cannot convert symbols to int") TypeError: Cannot convert symbols to int During handling of the above exception, another exception occurred: Traceback (most recent call last): File "repro.py", line 4, in <module> sympy.Integer(1024)//s0 File "/Users/ezyang/Dev/sympy/sympy/core/decorators.py", line 65, in __sympifyit_wrapper return func(a, b) File "/Users/ezyang/Dev/sympy/sympy/core/numbers.py", line 2426, in __floordiv__ return Integer(divmod(self, other)[0]) File "/Users/ezyang/Dev/sympy/sympy/core/cache.py", line 72, in wrapper retval = cfunc(*args, **kwargs) File "/Users/ezyang/Dev/sympy/sympy/core/numbers.py", line 2100, in __new__ raise TypeError( TypeError: Argument of Integer should be of numeric type, got floor(1024/s0). ``` oddly enough, it works if the lhs is a plain Python int.
The fix seems to be ```diff diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index 3b1aec2..52f7ea4 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -2423,7 +2423,7 @@ def __floordiv__(self, other): return NotImplemented if isinstance(other, Integer): return Integer(self.p // other) - return Integer(divmod(self, other)[0]) + return divmod(self, other)[0] def __rfloordiv__(self, other): return Integer(Integer(other).p // self.p) ```
[ { "body": "```\r\nimport sympy\r\n\r\ns0 = sympy.Symbol('s0')\r\nsympy.Integer(1024)//s0\r\n```\r\n\r\ngives\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/numbers.py\", line 2098, in __new__\r\n ival = int(i)\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/expr.py\", line 320, in __int__\r\n raise TypeError(\"Cannot convert symbols to int\")\r\nTypeError: Cannot convert symbols to int\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"repro.py\", line 4, in <module>\r\n sympy.Integer(1024)//s0\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/decorators.py\", line 65, in __sympifyit_wrapper\r\n return func(a, b)\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/numbers.py\", line 2426, in __floordiv__\r\n return Integer(divmod(self, other)[0])\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/cache.py\", line 72, in wrapper\r\n retval = cfunc(*args, **kwargs)\r\n File \"/Users/ezyang/Dev/sympy/sympy/core/numbers.py\", line 2100, in __new__\r\n raise TypeError(\r\nTypeError: Argument of Integer should be of numeric type, got floor(1024/s0).\r\n```\r\n\r\noddly enough, it works if the lhs is a plain Python int.", "number": 24369, "title": "Floor division with sympy.Integer gives: Argument of Integer should be of numeric type, got floor(1024/s0)" } ]
36a36f87dd3ac94593d8de186efd3532c77f5191
{ "head_commit": "a33f3824ad5a7461d885d82d2641fbd00d7e70e4", "head_commit_message": "fix #24369", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 12ebd404ed2c..e577eb5c6bf4 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -209,6 +209,7 @@ Abhinav Anand <[email protected]>\n Abhinav Chanda <[email protected]>\n Abhishek <[email protected]>\n Abhishek Garg <[email protected]>\n+Abhishek Patidar <[email protected]> Abhishek Patidar <[email protected]>\n Abhishek Verma <[email protected]>\n Achal Jain <[email protected]>\n Adam Bloomston <[email protected]> <mail@adambloomston>\ndiff --git a/sympy/core/numbers.py b/sympy/core/numbers.py\nindex 3b1aec24296e..52f7ea45ada0 100644\n--- a/sympy/core/numbers.py\n+++ b/sympy/core/numbers.py\n@@ -2423,7 +2423,7 @@ def __floordiv__(self, other):\n return NotImplemented\n if isinstance(other, Integer):\n return Integer(self.p // other)\n- return Integer(divmod(self, other)[0])\n+ return divmod(self, other)[0]\n \n def __rfloordiv__(self, other):\n return Integer(Integer(other).p // self.p)\ndiff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py\nindex 8e42e2b0c70a..0575d30a2fa9 100644\n--- a/sympy/core/tests/test_numbers.py\n+++ b/sympy/core/tests/test_numbers.py\n@@ -16,6 +16,7 @@\n from sympy.core.symbol import Dummy, Symbol\n from sympy.core.sympify import sympify\n from sympy.functions.combinatorial.factorials import factorial\n+from sympy.functions.elementary.integers import floor\n from sympy.functions.combinatorial.numbers import fibonacci\n from sympy.functions.elementary.exponential import exp, log\n from sympy.functions.elementary.miscellaneous import sqrt, cbrt\n@@ -121,6 +122,7 @@ def test_mod():\n \n \n def test_divmod():\n+ s0 = Symbol(\"s0\")\n assert divmod(S(12), S(8)) == Tuple(1, 4)\n assert divmod(-S(12), S(8)) == Tuple(-2, 4)\n assert divmod(S.Zero, S.One) == Tuple(0, 0)\n@@ -128,6 +130,7 @@ def test_divmod():\n raises(ZeroDivisionError, lambda: divmod(S.One, S.Zero))\n assert divmod(S(12), 8) == Tuple(1, 4)\n assert divmod(12, S(8)) == Tuple(1, 4)\n+ assert divmod(1024,s0)[0] == floor(1024/s0)\n \n assert divmod(S(\"2\"), S(\"3/2\")) == Tuple(S(\"1\"), S(\"1/2\"))\n assert divmod(S(\"3/2\"), S(\"2\")) == Tuple(S(\"0\"), S(\"3/2\"))\n" }
[ { "diff_hunk": "@@ -121,13 +122,15 @@ def test_mod():\n \n \n def test_divmod():\n+ s0 = Symbol(\"s0\")\n assert divmod(S(12), S(8)) == Tuple(1, 4)\n assert divmod(-S(12), S(8)) == Tuple(-2, 4)\n assert divmod(S.Zero, S.One) == Tuple(0, 0)\n raises(ZeroDivisionError, lambda: divmod(S.Zero, S.Zero))\n raises(ZeroDivisionError, lambda: divmod(S.One, S.Zero))\n assert divmod(S(12), 8) == Tuple(1, 4)\n assert divmod(12, S(8)) == Tuple(1, 4)\n+ assert divmod(1024,s0)[0] == floor(1024/s0)", "line": null, "original_line": 133, "original_start_line": 125, "path": "sympy/core/tests/test_numbers.py", "start_line": null, "text": "@user1:\n```suggestion\r\n x = Symbol(\"x\")\r\n assert divmod(S(12), S(8)) == Tuple(1, 4)\r\n assert divmod(-S(12), S(8)) == Tuple(-2, 4)\r\n assert divmod(S.Zero, S.One) == Tuple(0, 0)\r\n raises(ZeroDivisionError, lambda: divmod(S.Zero, S.Zero))\r\n raises(ZeroDivisionError, lambda: divmod(S.One, S.Zero))\r\n assert divmod(S(12), 8) == Tuple(1, 4)\r\n assert divmod(12, S(8)) == Tuple(1, 4)\r\n assert S(1024)//x == 1024//x == floor(1024/x)\r\n```" }, { "diff_hunk": "@@ -121,13 +122,15 @@ def test_mod():\n \n \n def test_divmod():\n+ s0 = Symbol(\"s0\")\n assert divmod(S(12), S(8)) == Tuple(1, 4)\n assert divmod(-S(12), S(8)) == Tuple(-2, 4)\n assert divmod(S.Zero, S.One) == Tuple(0, 0)\n raises(ZeroDivisionError, lambda: divmod(S.Zero, S.Zero))\n raises(ZeroDivisionError, lambda: divmod(S.One, S.Zero))\n assert divmod(S(12), 8) == Tuple(1, 4)\n assert divmod(12, S(8)) == Tuple(1, 4)\n+ assert divmod(1024,s0)[0] == floor(1024/s0)", "line": null, "original_line": 133, "original_start_line": null, "path": "sympy/core/tests/test_numbers.py", "start_line": null, "text": "@user1:\nThis test already passes on master. We need another test for the case that fails which is specifically `Integer(1024) // s0`. We should also test `1024 // s0` if that isn't tested anywhere.\r\n\r\nThe symbol should just be called `x` rather than `s0`." } ]
94ffb875960b82c5c86b45e462cab3c65ebcf151
diff --git a/.mailmap b/.mailmap index 12ebd404ed2c..e577eb5c6bf4 100644 --- a/.mailmap +++ b/.mailmap @@ -209,6 +209,7 @@ Abhinav Anand <[email protected]> Abhinav Chanda <[email protected]> Abhishek <[email protected]> Abhishek Garg <[email protected]> +Abhishek Patidar <[email protected]> Abhishek Patidar <[email protected]> Abhishek Verma <[email protected]> Achal Jain <[email protected]> Adam Bloomston <[email protected]> <mail@adambloomston> diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index 3b1aec24296e..52f7ea45ada0 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -2423,7 +2423,7 @@ def __floordiv__(self, other): return NotImplemented if isinstance(other, Integer): return Integer(self.p // other) - return Integer(divmod(self, other)[0]) + return divmod(self, other)[0] def __rfloordiv__(self, other): return Integer(Integer(other).p // self.p) diff --git a/sympy/core/tests/test_numbers.py b/sympy/core/tests/test_numbers.py index 8e42e2b0c70a..8baf408933b0 100644 --- a/sympy/core/tests/test_numbers.py +++ b/sympy/core/tests/test_numbers.py @@ -16,6 +16,7 @@ from sympy.core.symbol import Dummy, Symbol from sympy.core.sympify import sympify from sympy.functions.combinatorial.factorials import factorial +from sympy.functions.elementary.integers import floor from sympy.functions.combinatorial.numbers import fibonacci from sympy.functions.elementary.exponential import exp, log from sympy.functions.elementary.miscellaneous import sqrt, cbrt @@ -121,6 +122,7 @@ def test_mod(): def test_divmod(): + x = Symbol("x") assert divmod(S(12), S(8)) == Tuple(1, 4) assert divmod(-S(12), S(8)) == Tuple(-2, 4) assert divmod(S.Zero, S.One) == Tuple(0, 0) @@ -128,6 +130,7 @@ def test_divmod(): raises(ZeroDivisionError, lambda: divmod(S.One, S.Zero)) assert divmod(S(12), 8) == Tuple(1, 4) assert divmod(12, S(8)) == Tuple(1, 4) + assert S(1024)//x == 1024//x == floor(1024/x) assert divmod(S("2"), S("3/2")) == Tuple(S("1"), S("1/2")) assert divmod(S("3/2"), S("2")) == Tuple(S("0"), S("3/2"))
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
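A short sketch of the behaviour this record fixes, mirroring the merged regression test (assumes a SymPy version that contains the merged commit above):

```python
from sympy import Integer, Symbol, floor

x = Symbol('x')

# Previously Integer(1024) // x raised TypeError("Argument of Integer should
# be of numeric type, got floor(1024/x)"), while plain int 1024 // x worked.
# After the fix both forms return the same symbolic floor expression.
print(Integer(1024) // x)                      # floor(1024/x)
print(1024 // x)                               # floor(1024/x)
print(Integer(1024) // x == floor(1024 / x))   # True
```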
sympy__sympy-24325@eb006f8
sympy/sympy
Python
24,325
Numerical error on conversion of coulomb to statcoulomb
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs fixes #24319 ,#24381 ,#24281 <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> #### Brief description of what is fixed or changed fixes numerical error in the conversion of `coulomb` and `statcoulomb` #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: or if no release note(s) should be included use: See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.units * fixed numerical error in conversion of statcoulomb and coulomb <!-- END RELEASE NOTES -->
2022-11-29T10:29:14Z
Numerical error on conversion of coulomb to statcoulomb ```python In[2]: from sympy.physics.units import convert_to In[3]: from sympy.physics.units.systems.cgs import cgs_gauss In[4]: from sympy.physics.units.definitions.unit_definitions import statcoulomb, coulomb, second, gram, centimeter, erg In[5]: convert_to(coulomb, statcoulomb, unit_system='cgs_gauss').n() Out[5]:29979245.8*statcoulomb ``` `Expected Output : 1 C ≘ 2997924580 statC ≈ 3.00×10⁹ statC` ```python def test_conversion_to_from_si(): assert convert_to(statcoulomb, coulomb, cgs_gauss) == 5*coulomb/149896229 assert convert_to(coulomb, statcoulomb, cgs_gauss) == 149896229*statcoulomb/5 ``` It should be fixed as : ```python def test_conversion_to_from_si(): assert convert_to(statcoulomb, coulomb, cgs_gauss) == coulomb/2997924580 assert convert_to(coulomb, statcoulomb, cgs_gauss) == 2997924580*statcoulomb ```
Can I open the PR for that ?
[ { "body": "```python\r\nIn[2]: from sympy.physics.units import convert_to\r\nIn[3]: from sympy.physics.units.systems.cgs import cgs_gauss\r\nIn[4]: from sympy.physics.units.definitions.unit_definitions import statcoulomb, coulomb, second, gram, centimeter, erg\r\nIn[5]: convert_to(coulomb, statcoulomb, unit_system='cgs_gauss').n()\r\n\r\nOut[5]:29979245.8*statcoulomb\r\n```\r\n`Expected Output : 1 C ≘ 2997924580 statC ≈ 3.00×109 statC`\r\n```python \r\ndef test_conversion_to_from_si():\r\n assert convert_to(statcoulomb, coulomb, cgs_gauss) == 5*coulomb/149896229\r\n assert convert_to(coulomb, statcoulomb, cgs_gauss) == 149896229*statcoulomb/5\r\n```\r\nIt should be fixed as :\r\n```python \r\ndef test_conversion_to_from_si():\r\n assert convert_to(statcoulomb, coulomb, cgs_gauss) == coulomb/2997924580\r\n assert convert_to(coulomb, statcoulomb, cgs_gauss) == 2997924580*statcoulomb\r\n```\r\n", "number": 24319, "title": "Numerical error on conversion of coulomb to statcoulomb " } ]
cdef6fcbfc12008d0de65ecd8ed21d1912e77e5d
{ "head_commit": "eb006f839d32c13735b34d998669461ecca61da6", "head_commit_message": "add tests", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex be8e110a76d1..cd54bf00ef48 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -209,6 +209,7 @@ Abhinav Anand <[email protected]>\n Abhinav Chanda <[email protected]>\n Abhishek <[email protected]>\n Abhishek Garg <[email protected]>\n+Abhishek Patidar <[email protected]> Abhishek Patidar <[email protected]>\n Abhishek Verma <[email protected]>\n Achal Jain <[email protected]>\n Adam Bloomston <[email protected]> <mail@adambloomston>\ndiff --git a/sympy/physics/units/systems/cgs.py b/sympy/physics/units/systems/cgs.py\nindex fbf7a001bb7f..6a77dbd4a86a 100644\n--- a/sympy/physics/units/systems/cgs.py\n+++ b/sympy/physics/units/systems/cgs.py\n@@ -56,16 +56,16 @@\n cgs_gauss.set_quantity_scale_factor(maxwell, sqrt(centimeter**3*gram)/second)\n \n # SI units expressed in CGS-gaussian units:\n-cgs_gauss.set_quantity_scale_factor(coulomb, speed_of_light*statcoulomb/10)\n-cgs_gauss.set_quantity_scale_factor(ampere, speed_of_light*statcoulomb/second/10)\n-cgs_gauss.set_quantity_scale_factor(volt, speed_of_light*statvolt/10**6)\n+cgs_gauss.set_quantity_scale_factor(coulomb, 10*speed_of_light*statcoulomb)\n+cgs_gauss.set_quantity_scale_factor(ampere, 10*speed_of_light*statcoulomb/second)\n+cgs_gauss.set_quantity_scale_factor(volt, 10**6/speed_of_light*statvolt)\n cgs_gauss.set_quantity_scale_factor(weber, 10**8*maxwell)\n cgs_gauss.set_quantity_scale_factor(tesla, 10**4*gauss)\n cgs_gauss.set_quantity_scale_factor(debye, One/10**18*statcoulomb*centimeter)\n cgs_gauss.set_quantity_scale_factor(oersted, sqrt(gram/centimeter)/second)\n-cgs_gauss.set_quantity_scale_factor(ohm, 10**9/speed_of_light**2*second/centimeter)\n-cgs_gauss.set_quantity_scale_factor(farad, One/10**9*speed_of_light**2*centimeter)\n-cgs_gauss.set_quantity_scale_factor(henry, 10**9/speed_of_light**2/centimeter*second**2)\n+cgs_gauss.set_quantity_scale_factor(ohm, 10**5/speed_of_light**2*second/centimeter)\n+cgs_gauss.set_quantity_scale_factor(farad, One/10**5*speed_of_light**2*centimeter)\n+cgs_gauss.set_quantity_scale_factor(henry, 10**5/speed_of_light**2/centimeter*second**2)\n \n # Coulomb's constant:\n cgs_gauss.set_quantity_dimension(coulomb_constant, 1)\ndiff --git a/sympy/physics/units/tests/test_unit_system_cgs_gauss.py b/sympy/physics/units/tests/test_unit_system_cgs_gauss.py\nindex 0dfb2f526279..2a1dc82d38fe 100644\n--- a/sympy/physics/units/tests/test_unit_system_cgs_gauss.py\n+++ b/sympy/physics/units/tests/test_unit_system_cgs_gauss.py\n@@ -4,17 +4,16 @@\n from sympy.functions.elementary.miscellaneous import sqrt\n from sympy.physics.units import convert_to, coulomb_constant, elementary_charge, gravitational_constant, planck\n from sympy.physics.units.definitions.unit_definitions import statcoulomb, coulomb, second, gram, centimeter, erg, \\\n- newton, joule, dyne, speed_of_light, meter\n+ newton, joule, dyne, speed_of_light, meter, farad, henry, statvolt, volt, ohm\n from sympy.physics.units.systems import SI\n from sympy.physics.units.systems.cgs import cgs_gauss\n \n \n def test_conversion_to_from_si():\n-\n- assert convert_to(statcoulomb, coulomb, cgs_gauss) == 5*coulomb/149896229\n- assert convert_to(coulomb, statcoulomb, cgs_gauss) == 149896229*statcoulomb/5\n+ assert convert_to(statcoulomb, coulomb, cgs_gauss) == coulomb/2997924580\n+ assert convert_to(coulomb, statcoulomb, cgs_gauss) == 2997924580*statcoulomb\n assert convert_to(statcoulomb, 
sqrt(gram*centimeter**3)/second, cgs_gauss) == centimeter**(S(3)/2)*sqrt(gram)/second\n- assert convert_to(coulomb, sqrt(gram*centimeter**3)/second, cgs_gauss) == 149896229*centimeter**(S(3)/2)*sqrt(gram)/(5*second)\n+ assert convert_to(coulomb, sqrt(gram*centimeter**3)/second, cgs_gauss) == 2997924580*centimeter**(S(3)/2)*sqrt(gram)/second\n \n # SI units have an additional base unit, no conversion in case of electromagnetism:\n assert convert_to(coulomb, statcoulomb, SI) == coulomb\n@@ -26,18 +25,47 @@ def test_conversion_to_from_si():\n assert convert_to(joule, erg, SI) == 10**7*erg\n assert convert_to(joule, erg, cgs_gauss) == 10**7*erg\n \n+\n assert convert_to(dyne, newton, SI) == newton/10**5\n assert convert_to(dyne, newton, cgs_gauss) == newton/10**5\n assert convert_to(newton, dyne, SI) == 10**5*dyne\n assert convert_to(newton, dyne, cgs_gauss) == 10**5*dyne\n \n \n+def test_ohm_cgs_gauss():\n+\n+ assert convert_to(ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(1*ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(2*ohm,second/centimeter,cgs_gauss) == 50000*second/(22468879468420441*centimeter)\n+ assert NS(convert_to(ohm,second/centimeter,cgs_gauss)) == '1.11265005605362e-12*second/centimeter'\n+ assert NS(convert_to(2*ohm,second/centimeter,cgs_gauss)) == '2.22530011210724e-12*second/centimeter'\n+\n+def test_henry_cgs_gauss():\n+ assert convert_to(henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(1*henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(2*henry,second**2/centimeter,cgs_gauss) == 50000*second**2/(22468879468420441*centimeter)\n+ assert NS(convert_to(henry,second**2/centimeter,cgs_gauss)) == '1.11265005605362e-12*second**2/centimeter'\n+ assert NS(convert_to(2*henry,second**2/centimeter,cgs_gauss)) == '2.22530011210724e-12*second**2/centimeter'\n+\n+def test_volt_cgs_gauss():\n+ assert convert_to(volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(1*volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(2*volt,statvolt,cgs_gauss) == 2*10**6*statvolt/299792458\n+\n+def test_farad_cgs_gauss():\n+\n+ assert convert_to(farad,centimeter,cgs_gauss) == 299792458**2*centimeter/10**5\n+ assert convert_to(1*farad,centimeter,cgs_gauss) == 299792458**2*centimeter/10**5\n+ assert convert_to(2*farad,centimeter,cgs_gauss) == 2*299792458**2*centimeter/10**5\n+\n+\n+\n def test_cgs_gauss_convert_constants():\n \n assert convert_to(speed_of_light, centimeter/second, cgs_gauss) == 29979245800*centimeter/second\n \n assert convert_to(coulomb_constant, 1, cgs_gauss) == 1\n- assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, cgs_gauss) == 22468879468420441*meter**2*newton/(25000000000*coulomb**2)\n+ assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, cgs_gauss) == 22468879468420441*meter**2*newton/(2500000*coulomb**2)\n assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, SI) == 22468879468420441*meter**2*newton/(2500000*coulomb**2)\n assert convert_to(coulomb_constant, dyne*centimeter**2/statcoulomb**2, cgs_gauss) == centimeter**2*dyne/statcoulomb**2\n assert convert_to(coulomb_constant, 1, SI) == coulomb_constant\n" }
[ { "diff_hunk": "@@ -26,18 +25,47 @@ def test_conversion_to_from_si():\n assert convert_to(joule, erg, SI) == 10**7*erg\n assert convert_to(joule, erg, cgs_gauss) == 10**7*erg\n \n+\n assert convert_to(dyne, newton, SI) == newton/10**5\n assert convert_to(dyne, newton, cgs_gauss) == newton/10**5\n assert convert_to(newton, dyne, SI) == 10**5*dyne\n assert convert_to(newton, dyne, cgs_gauss) == 10**5*dyne\n \n \n+def test_ohm_cgs_gauss():\n+\n+ assert convert_to(ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(1*ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(2*ohm,second/centimeter,cgs_gauss) == 50000*second/(22468879468420441*centimeter)\n+ assert NS(convert_to(ohm,second/centimeter,cgs_gauss)) == '1.11265005605362e-12*second/centimeter'\n+ assert NS(convert_to(2*ohm,second/centimeter,cgs_gauss)) == '2.22530011210724e-12*second/centimeter'\n+\n+def test_henry_cgs_gauss():\n+ assert convert_to(henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(1*henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(2*henry,second**2/centimeter,cgs_gauss) == 50000*second**2/(22468879468420441*centimeter)\n+ assert NS(convert_to(henry,second**2/centimeter,cgs_gauss)) == '1.11265005605362e-12*second**2/centimeter'\n+ assert NS(convert_to(2*henry,second**2/centimeter,cgs_gauss)) == '2.22530011210724e-12*second**2/centimeter'\n+\n+def test_volt_cgs_gauss():\n+ assert convert_to(volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(1*volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(2*volt,statvolt,cgs_gauss) == 2*10**6*statvolt/299792458", "line": null, "original_line": 53, "original_start_line": 52, "path": "sympy/physics/units/tests/test_unit_system_cgs_gauss.py", "start_line": null, "text": "@user1:\nditto\n\n@author:\nThis test is for the conversion of a `volt` to a `statvolt`\r\nIn master when we are converting `volt` to `statvolt` it gives a wrong answer.\r\nSo I fix it to `cgs_gauss.set_quantity_scale_factor(volt, 10**6/speed_of_light*statvolt)`\r\nfrom `cgs_gauss.set_quantity_scale_factor(volt, speed_of_light*statvolt/10**6)`\r\nReference for the [conversions](https://en.wikipedia.org/wiki/Gaussian_units) of SI unit to gaussian units" }, { "diff_hunk": "@@ -26,18 +25,47 @@ def test_conversion_to_from_si():\n assert convert_to(joule, erg, SI) == 10**7*erg\n assert convert_to(joule, erg, cgs_gauss) == 10**7*erg\n \n+\n assert convert_to(dyne, newton, SI) == newton/10**5\n assert convert_to(dyne, newton, cgs_gauss) == newton/10**5\n assert convert_to(newton, dyne, SI) == 10**5*dyne\n assert convert_to(newton, dyne, cgs_gauss) == 10**5*dyne\n \n \n+def test_ohm_cgs_gauss():\n+\n+ assert convert_to(ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(1*ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(2*ohm,second/centimeter,cgs_gauss) == 50000*second/(22468879468420441*centimeter)\n+ assert NS(convert_to(ohm,second/centimeter,cgs_gauss)) == '1.11265005605362e-12*second/centimeter'\n+ assert NS(convert_to(2*ohm,second/centimeter,cgs_gauss)) == '2.22530011210724e-12*second/centimeter'\n+\n+def test_henry_cgs_gauss():\n+ assert convert_to(henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert 
convert_to(1*henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(2*henry,second**2/centimeter,cgs_gauss) == 50000*second**2/(22468879468420441*centimeter)\n+ assert NS(convert_to(henry,second**2/centimeter,cgs_gauss)) == '1.11265005605362e-12*second**2/centimeter'\n+ assert NS(convert_to(2*henry,second**2/centimeter,cgs_gauss)) == '2.22530011210724e-12*second**2/centimeter'", "line": null, "original_line": 48, "original_start_line": 45, "path": "sympy/physics/units/tests/test_unit_system_cgs_gauss.py", "start_line": null, "text": "@user1:\nditto\n\n@author:\nThis test is for the conversion of a `henry` to a `second**/centimeter`\r\nIn master when we are converting `henry` to `second**2/centimeter` it gives a wrong answer.\r\nSo I fix it to `cgs_gauss.set_quantity_scale_factor(henry, 10**5/speed_of_light**2/centimeter*second**2)`\r\nfrom `cgs_gauss.set_quantity_scale_factor(henry, 10**9/speed_of_light**2/centimeter*second**2)`\r\nReference for the [conversions](https://en.wikipedia.org/wiki/Gaussian_units) of SI unit to gaussian units" }, { "diff_hunk": "@@ -26,18 +25,47 @@ def test_conversion_to_from_si():\n assert convert_to(joule, erg, SI) == 10**7*erg\n assert convert_to(joule, erg, cgs_gauss) == 10**7*erg\n \n+\n assert convert_to(dyne, newton, SI) == newton/10**5\n assert convert_to(dyne, newton, cgs_gauss) == newton/10**5\n assert convert_to(newton, dyne, SI) == 10**5*dyne\n assert convert_to(newton, dyne, cgs_gauss) == 10**5*dyne\n \n \n+def test_ohm_cgs_gauss():\n+\n+ assert convert_to(ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(1*ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(2*ohm,second/centimeter,cgs_gauss) == 50000*second/(22468879468420441*centimeter)\n+ assert NS(convert_to(ohm,second/centimeter,cgs_gauss)) == '1.11265005605362e-12*second/centimeter'\n+ assert NS(convert_to(2*ohm,second/centimeter,cgs_gauss)) == '2.22530011210724e-12*second/centimeter'", "line": null, "original_line": 41, "original_start_line": 38, "path": "sympy/physics/units/tests/test_unit_system_cgs_gauss.py", "start_line": null, "text": "@user1:\nWhat is the purpose of these test beyond the first one?\n\n@author:\nThis test is for the conversion of an ohm to a second/centimeter\r\nIn master when we are converting ohm to second/centimeter it gives a wrong answer.\r\nSo I fix it to `cgs_gauss.set_quantity_scale_factor(ohm, 10**5/speed_of_light**2*second/centimeter)`\r\nfrom `cgs_gauss.set_quantity_scale_factor(ohm, 10**9/speed_of_light**2*second/centimeter)` \r\nReference for the [conversions](https://en.wikipedia.org/wiki/Gaussian_units) of SI unit to gaussian units\n\n@user1:\nThe first test does that. I don't think the additional tests are adding value to the test suite.\n\n@author:\nOk I will remove those tests." 
}, { "diff_hunk": "@@ -26,18 +25,47 @@ def test_conversion_to_from_si():\n assert convert_to(joule, erg, SI) == 10**7*erg\n assert convert_to(joule, erg, cgs_gauss) == 10**7*erg\n \n+\n assert convert_to(dyne, newton, SI) == newton/10**5\n assert convert_to(dyne, newton, cgs_gauss) == newton/10**5\n assert convert_to(newton, dyne, SI) == 10**5*dyne\n assert convert_to(newton, dyne, cgs_gauss) == 10**5*dyne\n \n \n+def test_ohm_cgs_gauss():\n+\n+ assert convert_to(ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(1*ohm,second/centimeter,cgs_gauss) == 25000*second/(22468879468420441*centimeter)\n+ assert convert_to(2*ohm,second/centimeter,cgs_gauss) == 50000*second/(22468879468420441*centimeter)\n+ assert NS(convert_to(ohm,second/centimeter,cgs_gauss)) == '1.11265005605362e-12*second/centimeter'\n+ assert NS(convert_to(2*ohm,second/centimeter,cgs_gauss)) == '2.22530011210724e-12*second/centimeter'\n+\n+def test_henry_cgs_gauss():\n+ assert convert_to(henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(1*henry,second**2/centimeter,cgs_gauss) == 25000*second**2/(22468879468420441*centimeter)\n+ assert convert_to(2*henry,second**2/centimeter,cgs_gauss) == 50000*second**2/(22468879468420441*centimeter)\n+ assert NS(convert_to(henry,second**2/centimeter,cgs_gauss)) == '1.11265005605362e-12*second**2/centimeter'\n+ assert NS(convert_to(2*henry,second**2/centimeter,cgs_gauss)) == '2.22530011210724e-12*second**2/centimeter'\n+\n+def test_volt_cgs_gauss():\n+ assert convert_to(volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(1*volt,statvolt,cgs_gauss) == 10**6*statvolt/299792458\n+ assert convert_to(2*volt,statvolt,cgs_gauss) == 2*10**6*statvolt/299792458\n+\n+def test_farad_cgs_gauss():\n+\n+ assert convert_to(farad,centimeter,cgs_gauss) == 299792458**2*centimeter/10**5\n+ assert convert_to(1*farad,centimeter,cgs_gauss) == 299792458**2*centimeter/10**5\n+ assert convert_to(2*farad,centimeter,cgs_gauss) == 2*299792458**2*centimeter/10**5", "line": null, "original_line": 59, "original_start_line": 58, "path": "sympy/physics/units/tests/test_unit_system_cgs_gauss.py", "start_line": null, "text": "@user1:\nditto\n\n@author:\nThis test is for the conversion of a `farad` to a `centimeter`\r\nIn master when we are converting `farad` to `centimeter` it gives a wrong answer.\r\nSo I fix it to `cgs_gauss.set_quantity_scale_factor(farad, One/10**5*speed_of_light**2*centimeter)`\r\nfrom `cgs_gauss.set_quantity_scale_factor(farad, One/10**9*speed_of_light**2*centimeter)`\r\nReference for the [conversions](https://en.wikipedia.org/wiki/Gaussian_units) of SI unit to gaussian units" } ]
5e676000de96867c81abf4aecb62ac8e8dcf654a
diff --git a/.mailmap b/.mailmap index be8e110a76d1..cd54bf00ef48 100644 --- a/.mailmap +++ b/.mailmap @@ -209,6 +209,7 @@ Abhinav Anand <[email protected]> Abhinav Chanda <[email protected]> Abhishek <[email protected]> Abhishek Garg <[email protected]> +Abhishek Patidar <[email protected]> Abhishek Patidar <[email protected]> Abhishek Verma <[email protected]> Achal Jain <[email protected]> Adam Bloomston <[email protected]> <mail@adambloomston> diff --git a/sympy/physics/units/systems/cgs.py b/sympy/physics/units/systems/cgs.py index fbf7a001bb7f..6a77dbd4a86a 100644 --- a/sympy/physics/units/systems/cgs.py +++ b/sympy/physics/units/systems/cgs.py @@ -56,16 +56,16 @@ cgs_gauss.set_quantity_scale_factor(maxwell, sqrt(centimeter**3*gram)/second) # SI units expressed in CGS-gaussian units: -cgs_gauss.set_quantity_scale_factor(coulomb, speed_of_light*statcoulomb/10) -cgs_gauss.set_quantity_scale_factor(ampere, speed_of_light*statcoulomb/second/10) -cgs_gauss.set_quantity_scale_factor(volt, speed_of_light*statvolt/10**6) +cgs_gauss.set_quantity_scale_factor(coulomb, 10*speed_of_light*statcoulomb) +cgs_gauss.set_quantity_scale_factor(ampere, 10*speed_of_light*statcoulomb/second) +cgs_gauss.set_quantity_scale_factor(volt, 10**6/speed_of_light*statvolt) cgs_gauss.set_quantity_scale_factor(weber, 10**8*maxwell) cgs_gauss.set_quantity_scale_factor(tesla, 10**4*gauss) cgs_gauss.set_quantity_scale_factor(debye, One/10**18*statcoulomb*centimeter) cgs_gauss.set_quantity_scale_factor(oersted, sqrt(gram/centimeter)/second) -cgs_gauss.set_quantity_scale_factor(ohm, 10**9/speed_of_light**2*second/centimeter) -cgs_gauss.set_quantity_scale_factor(farad, One/10**9*speed_of_light**2*centimeter) -cgs_gauss.set_quantity_scale_factor(henry, 10**9/speed_of_light**2/centimeter*second**2) +cgs_gauss.set_quantity_scale_factor(ohm, 10**5/speed_of_light**2*second/centimeter) +cgs_gauss.set_quantity_scale_factor(farad, One/10**5*speed_of_light**2*centimeter) +cgs_gauss.set_quantity_scale_factor(henry, 10**5/speed_of_light**2/centimeter*second**2) # Coulomb's constant: cgs_gauss.set_quantity_dimension(coulomb_constant, 1) diff --git a/sympy/physics/units/tests/test_unit_system_cgs_gauss.py b/sympy/physics/units/tests/test_unit_system_cgs_gauss.py index 0dfb2f526279..e4a4539e7966 100644 --- a/sympy/physics/units/tests/test_unit_system_cgs_gauss.py +++ b/sympy/physics/units/tests/test_unit_system_cgs_gauss.py @@ -4,17 +4,16 @@ from sympy.functions.elementary.miscellaneous import sqrt from sympy.physics.units import convert_to, coulomb_constant, elementary_charge, gravitational_constant, planck from sympy.physics.units.definitions.unit_definitions import statcoulomb, coulomb, second, gram, centimeter, erg, \ - newton, joule, dyne, speed_of_light, meter + newton, joule, dyne, speed_of_light, meter, farad, henry, statvolt, volt, ohm from sympy.physics.units.systems import SI from sympy.physics.units.systems.cgs import cgs_gauss def test_conversion_to_from_si(): - - assert convert_to(statcoulomb, coulomb, cgs_gauss) == 5*coulomb/149896229 - assert convert_to(coulomb, statcoulomb, cgs_gauss) == 149896229*statcoulomb/5 + assert convert_to(statcoulomb, coulomb, cgs_gauss) == coulomb/2997924580 + assert convert_to(coulomb, statcoulomb, cgs_gauss) == 2997924580*statcoulomb assert convert_to(statcoulomb, sqrt(gram*centimeter**3)/second, cgs_gauss) == centimeter**(S(3)/2)*sqrt(gram)/second - assert convert_to(coulomb, sqrt(gram*centimeter**3)/second, cgs_gauss) == 149896229*centimeter**(S(3)/2)*sqrt(gram)/(5*second) + assert 
convert_to(coulomb, sqrt(gram*centimeter**3)/second, cgs_gauss) == 2997924580*centimeter**(S(3)/2)*sqrt(gram)/second # SI units have an additional base unit, no conversion in case of electromagnetism: assert convert_to(coulomb, statcoulomb, SI) == coulomb @@ -26,6 +25,7 @@ def test_conversion_to_from_si(): assert convert_to(joule, erg, SI) == 10**7*erg assert convert_to(joule, erg, cgs_gauss) == 10**7*erg + assert convert_to(dyne, newton, SI) == newton/10**5 assert convert_to(dyne, newton, cgs_gauss) == newton/10**5 assert convert_to(newton, dyne, SI) == 10**5*dyne @@ -37,7 +37,7 @@ def test_cgs_gauss_convert_constants(): assert convert_to(speed_of_light, centimeter/second, cgs_gauss) == 29979245800*centimeter/second assert convert_to(coulomb_constant, 1, cgs_gauss) == 1 - assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, cgs_gauss) == 22468879468420441*meter**2*newton/(25000000000*coulomb**2) + assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, cgs_gauss) == 22468879468420441*meter**2*newton/(2500000*coulomb**2) assert convert_to(coulomb_constant, newton*meter**2/coulomb**2, SI) == 22468879468420441*meter**2*newton/(2500000*coulomb**2) assert convert_to(coulomb_constant, dyne*centimeter**2/statcoulomb**2, cgs_gauss) == centimeter**2*dyne/statcoulomb**2 assert convert_to(coulomb_constant, 1, SI) == coulomb_constant @@ -46,3 +46,9 @@ def test_cgs_gauss_convert_constants(): assert convert_to(elementary_charge, statcoulomb, cgs_gauss) assert convert_to(gravitational_constant, dyne*centimeter**2/gram**2, cgs_gauss) assert NS(convert_to(planck, erg*second, cgs_gauss)) == '6.62607015e-27*erg*second' + + spc = 25000*second/(22468879468420441*centimeter) + assert convert_to(ohm, second/centimeter, cgs_gauss) == spc + assert convert_to(henry, second**2/centimeter, cgs_gauss) == spc*second + assert convert_to(volt, statvolt, cgs_gauss) == 10**6*statvolt/299792458 + assert convert_to(farad, centimeter, cgs_gauss) == 299792458**2*centimeter/10**5
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-24296@af44337
sympy/sympy
Python
24,296
Small errors in _laplace_rule_exp and _laplace_rule_trig corrected
#### References to other Issues or PRs Fixes #24294 #### Brief description of what is fixed or changed The functions _laplace_rule_exp and _laplace_rule_trig did not return the constant factors of their expressions. #### Other comments #### Release Notes <!-- BEGIN RELEASE NOTES --> * integrals * corrected small errors in Laplace transform rules <!-- END RELEASE NOTES -->
2022-11-22T07:01:30Z
laplace_transform drops coefficient when Heaviside used when using Heaviside with another function, I notice that adding a scalar coefficient (e.g., `2`) gets ignored in the output. I would expect `L(2 * sin(t) * u(t))` to be `2/(s**2 + 1)`, but `laplace_transform` outputs `1/(s**2 + 1)` (see **Out[5]** below). Note that when I explicitly state `t > 0`, I get the answer I expected (**Out[9]**). But by setting `t > 0`, the Laplace transform of time-domain functions having a delta function get messed up (since 0 is excluded). Is this a bug? ```python In [1]: import sympy as sympy ...: from sympy import laplace_transform, Heaviside, Symbol ...: from sympy.abc import t, s In [2]: laplace_transform(Heaviside(t), t, s) Out[2]: (1/s, 0, True) In [3]: laplace_transform(2 * Heaviside(t), t, s) Out[3]: (2/s, 0, True) In [4]: laplace_transform(sympy.sin(t) * Heaviside(t), t, s) Out[4]: (1/(s**2 + 1), 0, True) In [5]: laplace_transform(2 * sympy.sin(t) * Heaviside(t), t, s) Out[5]: (1/(s**2 + 1), 0, True) In [6]: laplace_transform(sympy.sin(t), t, s) Out[6]: (1/(s**2 + 1), 0, True) In [7]: laplace_transform(2 * sympy.sin(t), t, s) Out[7]: (2/(s**2 + 1), 0, True) In [8]: t = Symbol('t', positive=True) In [9]: laplace_transform(2 * sympy.sin(t) * Heaviside(t), t, s) Out[9]: (2/(s**2 + 1), 0, True) ``` --- Sympy version 1.11.1
CC @hanspi42 This seems to fix it but the code around seems a bit fishy so maybe more is needed: ```python diff --git a/sympy/integrals/transforms.py b/sympy/integrals/transforms.py index 071a6f1..4160acd 100644 --- a/sympy/integrals/transforms.py +++ b/sympy/integrals/transforms.py @@ -1694,6 +1694,7 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints): L = _laplace_apply_rules(ma1[z], t, s, doit=doit, **hints) try: r, p, c = L + r = k * r # The convergence plane changes only if the shift has been # done along the real axis: if sd==1: ``` With that: ```python In [1]: from sympy.abc import t, s In [2]: import sympy In [3]: laplace_transform(2 * sympy.sin(t) * Heaviside(t), t, s) Out[3]: ⎛ 2 ⎞ ⎜──────, 0, True⎟ ⎜ 2 ⎟ ⎝s + 1 ⎠ ``` Yes, this looks like the correct fix. I will write test cases and implement it right away.
[ { "body": "when using Heaviside with another function, I notice that adding a scalar coefficient (e.g., `2`) gets ignored in the output. I would expect `L(2 * sin(t) * u(t))` to be `2/(s**2 + 1)`, but `laplace_transform` outputs `1/(s**2 + 1)` (see **Out[5]** below).\r\n\r\nNote that when I explicitly state `t > 0`, I get the answer I expected (**Out[9]**). But by setting `t > 0`, the Laplace transform of time-domain functions having a delta function get messed up (since 0 is excluded).\r\n\r\nIs this a bug?\r\n\r\n```python\r\nIn [1]: import sympy as sympy \r\n ...: from sympy import laplace_transform, Heaviside, Symbol \r\n ...: from sympy.abc import t, s\r\n\r\nIn [2]: laplace_transform(Heaviside(t), t, s)\r\nOut[2]: (1/s, 0, True)\r\n\r\nIn [3]: laplace_transform(2 * Heaviside(t), t, s)\r\nOut[3]: (2/s, 0, True)\r\n\r\nIn [4]: laplace_transform(sympy.sin(t) * Heaviside(t), t, s)\r\nOut[4]: (1/(s**2 + 1), 0, True)\r\n\r\nIn [5]: laplace_transform(2 * sympy.sin(t) * Heaviside(t), t, s)\r\nOut[5]: (1/(s**2 + 1), 0, True)\r\n\r\nIn [6]: laplace_transform(sympy.sin(t), t, s)\r\nOut[6]: (1/(s**2 + 1), 0, True)\r\n\r\nIn [7]: laplace_transform(2 * sympy.sin(t), t, s)\r\nOut[7]: (2/(s**2 + 1), 0, True)\r\n\r\nIn [8]: t = Symbol('t', positive=True)\r\n\r\nIn [9]: laplace_transform(2 * sympy.sin(t) * Heaviside(t), t, s)\r\nOut[9]: (2/(s**2 + 1), 0, True)\r\n```\r\n---\r\nSympy version 1.11.1", "number": 24294, "title": "laplace_transform drops coefficient when Heaviside used" } ]
69c654b27d939718cd060172ad0fba95ada5a699
{ "head_commit": "af443377dd48c2caf440d2b6dd76830dbe84712f", "head_commit_message": "Moved all constant-factor calculations to `_laplace_apply_rules`", "patch_to_review": "diff --git a/sympy/integrals/tests/test_transforms.py b/sympy/integrals/tests/test_transforms.py\nindex fa0a69085bef..884ad0a6d5b6 100644\n--- a/sympy/integrals/tests/test_transforms.py\n+++ b/sympy/integrals/tests/test_transforms.py\n@@ -707,6 +707,10 @@ def test_laplace_transform():\n assert inverse_laplace_transform(\n f(w), w, t, plane=0) == InverseLaplaceTransform(f(w), w, t, 0)\n assert LT(f(t)*g(t), t, s) == LaplaceTransform(f(t)*g(t), t, s)\n+ # Issue #24294\n+ assert LT(b*f(a*t), t, s) == b*LaplaceTransform(f(t), t, s/a)/a\n+ assert LT(3*exp(t)*Heaviside(t), t, s) == (3/(s - 1), 1, True)\n+ assert LT(2*sin(t)*Heaviside(t), t, s) == (2/(s**2 + 1), 0, True)\n \n # additional basic tests from wikipedia\n assert LT((t - a)**b*exp(-c*(t - a))*Heaviside(t - a), t, s) == \\\ndiff --git a/sympy/integrals/transforms.py b/sympy/integrals/transforms.py\nindex 071a6f111894..8318dc5c1175 100644\n--- a/sympy/integrals/transforms.py\n+++ b/sympy/integrals/transforms.py\n@@ -1579,8 +1579,7 @@ def _laplace_rule_timescale(f, t, s, doit=True, **hints):\n _simplify = hints.pop('simplify', True)\n b = Wild('b', exclude=[t])\n g = WildFunction('g', nargs=1)\n- k, func = f.as_independent(t, as_Add=False)\n- ma1 = func.match(g)\n+ ma1 = f.match(g)\n if ma1:\n arg = ma1[g].args[0].collect(t)\n ma2 = arg.match(b*t)\n@@ -1591,18 +1590,18 @@ def _laplace_rule_timescale(f, t, s, doit=True, **hints):\n if ma2[b]==1:\n if doit==True and not any(func.has(t) for func\n in ma1[g].atoms(AppliedUndef)):\n- return k*_laplace_transform(ma1[g].func(t), t, s,\n+ return _laplace_transform(ma1[g].func(t), t, s,\n simplify=_simplify)\n else:\n- return k*LaplaceTransform(ma1[g].func(t), t, s, **hints)\n+ return LaplaceTransform(ma1[g].func(t), t, s, **hints)\n else:\n L = _laplace_apply_rules(ma1[g].func(t), t, s/ma2[b],\n doit=doit, **hints)\n try:\n r, p, c = L\n- return (k/ma2[b]*r, p, c)\n+ return (1/ma2[b]*r, p, c)\n except TypeError:\n- return k/ma2[b]*L\n+ return 1/ma2[b]*L\n return None\n \n def _laplace_rule_heaviside(f, t, s, doit=True, **hints):\n@@ -1615,8 +1614,7 @@ def _laplace_rule_heaviside(f, t, s, doit=True, **hints):\n b = Wild('b', exclude=[t])\n y = Wild('y')\n g = WildFunction('g', nargs=1)\n- k, func = f.as_independent(t, as_Add=False)\n- ma1 = func.match(Heaviside(y)*g)\n+ ma1 = f.match(Heaviside(y)*g)\n if ma1:\n ma2 = ma1[y].match(t-a)\n ma3 = ma1[g].args[0].collect(t).match(t-b)\n@@ -1627,9 +1625,9 @@ def _laplace_rule_heaviside(f, t, s, doit=True, **hints):\n L = _laplace_apply_rules(ma1[g].func(t), t, s, doit=doit, **hints)\n try:\n r, p, c = L\n- return (k*exp(-ma2[a]*s)*r, p, c)\n+ return (exp(-ma2[a]*s)*r, p, c)\n except TypeError:\n- return k*exp(-ma2[a]*s)*L\n+ return exp(-ma2[a]*s)*L\n return None\n \n \n@@ -1643,8 +1641,7 @@ def _laplace_rule_exp(f, t, s, doit=True, **hints):\n \n y = Wild('y')\n z = Wild('z')\n- k, func = f.as_independent(t, as_Add=False)\n- ma1 = func.match(exp(y)*z)\n+ ma1 = f.match(exp(y)*z)\n if ma1:\n ma2 = ma1[y].collect(t).match(a*t)\n if ma2:\n@@ -1669,7 +1666,6 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints):\n a = Wild('a', exclude=[t])\n y = Wild('y')\n z = Wild('z')\n- k, func = f.as_independent(t, as_Add=False)\n # All of the rules have a very similar form: trig(y)*z is matched, and then\n # two copies of the Laplace transform of z are shifted in the s Domain\n # and added with a 
weight; see rules 1.6 to 1.9 in\n@@ -1684,7 +1680,7 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints):\n (sin(y), '1.8', -I, -1, I), (cos(y), '1.9', 1, 1, I)]\n for trigrule in trigrules:\n fm, nu, s1, s2, sd = trigrule\n- ma1 = func.match(fm*z)\n+ ma1 = f.match(fm*z)\n if ma1:\n ma2 = ma1[y].collect(t).match(a*t)\n if ma2:\n@@ -1777,8 +1773,13 @@ def _laplace_apply_rules(f, t, s, doit=True, **hints):\n prog_rules = [_laplace_rule_timescale, _laplace_rule_heaviside,\n _laplace_rule_exp, _laplace_rule_trig, _laplace_rule_diff]\n for p_rule in prog_rules:\n- LT = p_rule(f, t, s, doit=doit, **hints)\n+ LT = p_rule(func, t, s, doit=doit, **hints)\n if LT is not None:\n+ try:\n+ r, p, c = LT\n+ return (k*r, p, c)\n+ except TypeError:\n+ return k*LT\n return LT\n return None\n \n" }
[ { "diff_hunk": "@@ -1777,8 +1773,13 @@ def _laplace_apply_rules(f, t, s, doit=True, **hints):\n prog_rules = [_laplace_rule_timescale, _laplace_rule_heaviside,\n _laplace_rule_exp, _laplace_rule_trig, _laplace_rule_diff]\n for p_rule in prog_rules:\n- LT = p_rule(f, t, s, doit=doit, **hints)\n+ LT = p_rule(func, t, s, doit=doit, **hints)\n if LT is not None:\n+ try:\n+ r, p, c = LT\n+ return (k*r, p, c)\n+ except TypeError:\n+ return k*LT", "line": null, "original_line": 1782, "original_start_line": 1778, "path": "sympy/integrals/transforms.py", "start_line": null, "text": "@user1:\nRather than `try/except` it would be better to test the condition explicitly e.g.:\r\n```python\r\nconds = hints.get('noconds', False):\r\nif conds:\r\n r, p, c = LT\r\n return (k*r, p, c)\r\nelse:\r\n return k*LT\r\n```\r\nThis is better because:\r\n- It is more explicit\r\n- We should generally avoid catching TypeError which is usually indicative of bugs\r\n- Some expressions are iterable so there is a possibility of confusion here:\r\n ```python\r\n In [1]: r, p, c = Matrix([1, 2, 3])\r\n\r\n In [2]: r, p, c\r\n Out[2]: (1, 2, 3)\r\n ```\r\n- This also ensures that when `noconds` is set we will see an error if the wrong type of object was returned here.\r\n\r\nIn general catching exceptions is something that should be used for exceptional rather than expected cases.\n\n@author:\nThat was how we handled it a year ago, but I agree it is not good. I have rewritten all instances. However, since it is not always certain inside the recursive rule engine that conditions are returned even if `nocond` is not there, I am now also checking whether the return value's type is tuple.\r\n\r\nPush will follow soon ...\n\n@user1:\n> it is not always certain inside the recursive rule engine that conditions are returned even if `nocond` is not there\r\n\r\nThis is precisely why I would check `nocond`: if `nocond` is passed and a tuple is being returned then that's a bug is it not? Surely in that case we should *want* to see a `TypeError` rather than silently passing the wrong type through.\n\n@author:\nIt is the other way around: if noconds is not passed, it can still be that no tuple is returned inside the recursive tree. \n\n@author:\nTo explain a bit better: if, in the recursion tree, the Laplace transform cannot be evaluated, an unevaluated `LaplaceTransform()` is returned. So even if `noconds` is not set, and conditions should be derived, the result may not be a tuple, but a `LaplaceTransform()` object with `**hints` in the argument that do not contain `noconds`.\r\n\r\nWithout the type tuple check, e.g., `LT(b*f(a*t), t, s)` would fail.\n\n@user1:\nOkay that makes sense. I think it's not a good API to return either a tuple or a LaplaceTransform but I guess that can't be changed now." } ]
f8f54d96266618749131a69c528a6db2a5b28d27
diff --git a/sympy/integrals/tests/test_transforms.py b/sympy/integrals/tests/test_transforms.py index fa0a69085bef..884ad0a6d5b6 100644 --- a/sympy/integrals/tests/test_transforms.py +++ b/sympy/integrals/tests/test_transforms.py @@ -707,6 +707,10 @@ def test_laplace_transform(): assert inverse_laplace_transform( f(w), w, t, plane=0) == InverseLaplaceTransform(f(w), w, t, 0) assert LT(f(t)*g(t), t, s) == LaplaceTransform(f(t)*g(t), t, s) + # Issue #24294 + assert LT(b*f(a*t), t, s) == b*LaplaceTransform(f(t), t, s/a)/a + assert LT(3*exp(t)*Heaviside(t), t, s) == (3/(s - 1), 1, True) + assert LT(2*sin(t)*Heaviside(t), t, s) == (2/(s**2 + 1), 0, True) # additional basic tests from wikipedia assert LT((t - a)**b*exp(-c*(t - a))*Heaviside(t - a), t, s) == \ diff --git a/sympy/integrals/transforms.py b/sympy/integrals/transforms.py index 071a6f111894..d2f471f5ded0 100644 --- a/sympy/integrals/transforms.py +++ b/sympy/integrals/transforms.py @@ -1579,8 +1579,7 @@ def _laplace_rule_timescale(f, t, s, doit=True, **hints): _simplify = hints.pop('simplify', True) b = Wild('b', exclude=[t]) g = WildFunction('g', nargs=1) - k, func = f.as_independent(t, as_Add=False) - ma1 = func.match(g) + ma1 = f.match(g) if ma1: arg = ma1[g].args[0].collect(t) ma2 = arg.match(b*t) @@ -1591,18 +1590,19 @@ def _laplace_rule_timescale(f, t, s, doit=True, **hints): if ma2[b]==1: if doit==True and not any(func.has(t) for func in ma1[g].atoms(AppliedUndef)): - return k*_laplace_transform(ma1[g].func(t), t, s, + return _laplace_transform(ma1[g].func(t), t, s, simplify=_simplify) else: - return k*LaplaceTransform(ma1[g].func(t), t, s, **hints) + return LaplaceTransform(ma1[g].func(t), t, s, **hints) else: L = _laplace_apply_rules(ma1[g].func(t), t, s/ma2[b], doit=doit, **hints) - try: + noconds = hints.get('noconds', False) + if not noconds and type(L) is tuple: r, p, c = L - return (k/ma2[b]*r, p, c) - except TypeError: - return k/ma2[b]*L + return (1/ma2[b]*r, p, c) + else: + return 1/ma2[b]*L return None def _laplace_rule_heaviside(f, t, s, doit=True, **hints): @@ -1615,8 +1615,7 @@ def _laplace_rule_heaviside(f, t, s, doit=True, **hints): b = Wild('b', exclude=[t]) y = Wild('y') g = WildFunction('g', nargs=1) - k, func = f.as_independent(t, as_Add=False) - ma1 = func.match(Heaviside(y)*g) + ma1 = f.match(Heaviside(y)*g) if ma1: ma2 = ma1[y].match(t-a) ma3 = ma1[g].args[0].collect(t).match(t-b) @@ -1625,11 +1624,12 @@ def _laplace_rule_heaviside(f, t, s, doit=True, **hints): debug(' f: %s ( %s, %s, %s )'%(f, ma1, ma2, ma3)) debug(' rule: time shift (1.3)') L = _laplace_apply_rules(ma1[g].func(t), t, s, doit=doit, **hints) - try: + noconds = hints.get('noconds', False) + if not noconds and type(L) is tuple: r, p, c = L - return (k*exp(-ma2[a]*s)*r, p, c) - except TypeError: - return k*exp(-ma2[a]*s)*L + return (exp(-ma2[a]*s)*r, p, c) + else: + return exp(-ma2[a]*s)*L return None @@ -1643,8 +1643,7 @@ def _laplace_rule_exp(f, t, s, doit=True, **hints): y = Wild('y') z = Wild('z') - k, func = f.as_independent(t, as_Add=False) - ma1 = func.match(exp(y)*z) + ma1 = f.match(exp(y)*z) if ma1: ma2 = ma1[y].collect(t).match(a*t) if ma2: @@ -1652,10 +1651,11 @@ def _laplace_rule_exp(f, t, s, doit=True, **hints): debug(' f: %s ( %s, %s )'%(f, ma1, ma2)) debug(' rule: multiply with exp (1.5)') L = _laplace_apply_rules(ma1[z], t, s-ma2[a], doit=doit, **hints) - try: + noconds = hints.get('noconds', False) + if not noconds and type(L) is tuple: r, p, c = L return (r, p+ma2[a], c) - except TypeError: + else: return L return None @@ 
-1669,7 +1669,6 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints): a = Wild('a', exclude=[t]) y = Wild('y') z = Wild('z') - k, func = f.as_independent(t, as_Add=False) # All of the rules have a very similar form: trig(y)*z is matched, and then # two copies of the Laplace transform of z are shifted in the s Domain # and added with a weight; see rules 1.6 to 1.9 in @@ -1684,7 +1683,7 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints): (sin(y), '1.8', -I, -1, I), (cos(y), '1.9', 1, 1, I)] for trigrule in trigrules: fm, nu, s1, s2, sd = trigrule - ma1 = func.match(fm*z) + ma1 = f.match(fm*z) if ma1: ma2 = ma1[y].collect(t).match(a*t) if ma2: @@ -1692,7 +1691,8 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints): debug(' f: %s ( %s, %s )'%(f, ma1, ma2)) debug(' rule: multiply with %s (%s)'%(fm.func, nu)) L = _laplace_apply_rules(ma1[z], t, s, doit=doit, **hints) - try: + noconds = hints.get('noconds', False) + if not noconds and type(L) is tuple: r, p, c = L # The convergence plane changes only if the shift has been # done along the real axis: @@ -1703,7 +1703,7 @@ def _laplace_rule_trig(f, t, s, doit=True, **hints): return ((s1*(r.subs(s, s-sd*ma2[a])+\ s2*r.subs(s, s+sd*ma2[a]))).simplify()/2, p+cp_shift, c) - except TypeError: + else: if doit==True and _simplify==True: return (s1*(L.subs(s, s-sd*ma2[a])+\ s2*L.subs(s, s+sd*ma2[a]))).simplify()/2 @@ -1777,9 +1777,14 @@ def _laplace_apply_rules(f, t, s, doit=True, **hints): prog_rules = [_laplace_rule_timescale, _laplace_rule_heaviside, _laplace_rule_exp, _laplace_rule_trig, _laplace_rule_diff] for p_rule in prog_rules: - LT = p_rule(f, t, s, doit=doit, **hints) - if LT is not None: - return LT + L = p_rule(func, t, s, doit=doit, **hints) + if L is not None: + noconds = hints.get('noconds', False) + if not noconds and type(L) is tuple: + r, p, c = L + return (k*r, p, c) + else: + return k*L return None class LaplaceTransform(IntegralTransform):
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-24152@c8ffc40
sympy/sympy
Python
24,152
Fixes incomplete TensorProduct expand if scalar factors present in summands
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24142 #### Brief description of what is fixed or changed (Replaces identical PR 24147 that was inadvertently closed when renaming branches) See issue #24142: The expansion of a TensorProduct object stops incomplete if summands in the tensor product factors have scalar factors, e.g. ``` from sympy import * from sympy.physics.quantum import * U = Operator('U') V = Operator('V') P = TensorProduct(2*U - V, U + V) print(P) # (2*U - V)x(U + V) print(P.expand(tensorproduct=True)) #result: 2*Ux(U + V) - Vx(U + V) #expansion has missed 2nd tensor factor and is incomplete ``` This PR also adds test cases to test_tensorproduct.py. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.quantum * Fixed incomplete expansion of TensorProduct if summands have scalar factors <!-- END RELEASE NOTES -->
2022-10-21T13:47:03Z
Bug in expand of TensorProduct + Workaround + Fix ### Error description The expansion of a TensorProduct object stops incomplete if summands in the tensor product factors have (scalar) factors, e.g. ``` from sympy import * from sympy.physics.quantum import * U = Operator('U') V = Operator('V') P = TensorProduct(2*U - V, U + V) print(P) # (2*U - V)x(U + V) print(P.expand(tensorproduct=True)) #result: 2*Ux(U + V) - Vx(U + V) #expansion has missed 2nd tensor factor and is incomplete ``` This is clearly not the expected behaviour. It also effects other functions that rely on .expand(tensorproduct=True), as e.g. qapply() . ### Work around Repeat .expand(tensorproduct=True) as may times as there are tensor factors, resp. until the expanded term does no longer change. This is however only reasonable in interactive session and not in algorithms. ### Code Fix .expand relies on the method TensorProduct._eval_expand_tensorproduct(). The issue arises from an inprecise check in TensorProduct._eval_expand_tensorproduct() whether a recursive call is required; it fails when the creation of a TensorProduct object returns commutative (scalar) factors up front: in that case the constructor returns a Mul(c_factors, TensorProduct(..)). I thus propose the following code fix in TensorProduct._eval_expand_tensorproduct() in quantum/tensorproduct.py. I have marked the four lines to be added / modified: ``` def _eval_expand_tensorproduct(self, **hints): ... for aa in args[i].args: tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:]) c_part, nc_part = tp.args_cnc() #added if len(nc_part)==1 and isinstance(nc_part[0], TensorProduct): #modified nc_part = (nc_part[0]._eval_expand_tensorproduct(), ) #modified add_args.append(Mul(*c_part)*Mul(*nc_part)) #modified break ... ``` The fix splits of commutative (scalar) factors from the tp returned. The TensorProduct object will be the one nc factor in nc_part (see TensorProduct.__new__ constructor), if any. Note that the constructor will return 0 if a tensor factor is 0, so there is no guarantee that tp contains a TensorProduct object (e.g. TensorProduct(U-U, U+V).
Can you make a pull request with this fix? Will do. I haven't worked with git before, so bear with me. But as I'm currently digging into some of the quantum package and have more and larger patches in the pipeline, it seems worth the effort to get git set up on my side. So watch out :-)
[ { "body": "### Error description\r\nThe expansion of a TensorProduct object stops incomplete if summands in the tensor product factors have (scalar) factors, e.g.\r\n```\r\nfrom sympy import *\r\nfrom sympy.physics.quantum import *\r\nU = Operator('U')\r\nV = Operator('V')\r\nP = TensorProduct(2*U - V, U + V)\r\nprint(P) \r\n# (2*U - V)x(U + V)\r\nprint(P.expand(tensorproduct=True)) \r\n#result: 2*Ux(U + V) - Vx(U + V) #expansion has missed 2nd tensor factor and is incomplete\r\n```\r\nThis is clearly not the expected behaviour. It also effects other functions that rely on .expand(tensorproduct=True), as e.g. qapply() .\r\n\r\n### Work around\r\nRepeat .expand(tensorproduct=True) as may times as there are tensor factors, resp. until the expanded term does no longer change. This is however only reasonable in interactive session and not in algorithms.\r\n\r\n### Code Fix\r\n.expand relies on the method TensorProduct._eval_expand_tensorproduct(). The issue arises from an inprecise check in TensorProduct._eval_expand_tensorproduct() whether a recursive call is required; it fails when the creation of a TensorProduct object returns commutative (scalar) factors up front: in that case the constructor returns a Mul(c_factors, TensorProduct(..)).\r\nI thus propose the following code fix in TensorProduct._eval_expand_tensorproduct() in quantum/tensorproduct.py. I have marked the four lines to be added / modified:\r\n```\r\n def _eval_expand_tensorproduct(self, **hints):\r\n ...\r\n for aa in args[i].args:\r\n tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])\r\n c_part, nc_part = tp.args_cnc() #added\r\n if len(nc_part)==1 and isinstance(nc_part[0], TensorProduct): #modified\r\n nc_part = (nc_part[0]._eval_expand_tensorproduct(), ) #modified\r\n add_args.append(Mul(*c_part)*Mul(*nc_part)) #modified\r\n break\r\n ...\r\n```\r\nThe fix splits of commutative (scalar) factors from the tp returned. The TensorProduct object will be the one nc factor in nc_part (see TensorProduct.__new__ constructor), if any. Note that the constructor will return 0 if a tensor factor is 0, so there is no guarantee that tp contains a TensorProduct object (e.g. TensorProduct(U-U, U+V).\r\n\r\n\r\n", "number": 24142, "title": "Bug in expand of TensorProduct + Workaround + Fix" } ]
b9af885473ad7e34b5b0826cb424dd26d8934670
{ "head_commit": "c8ffc408bfe642d1ae0efc3471384c94226f5ff8", "head_commit_message": "Added to .mailmap", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 3e85675b0853..025916850778 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -449,6 +449,7 @@ Colin Marquardt <[email protected]>\n Colleen Lee <[email protected]> <[email protected]>\n Comer Duncan <[email protected]>\n Constantin Mateescu <[email protected]>\n+Costor <[email protected]>\n Craig A. Stoudt <[email protected]>\n Cristian Di Pietrantonio <[email protected]>\n Cristóvão Sousa <[email protected]>\ndiff --git a/sympy/physics/quantum/tensorproduct.py b/sympy/physics/quantum/tensorproduct.py\nindex 78accaf2955c..ebe7eff3562e 100644\n--- a/sympy/physics/quantum/tensorproduct.py\n+++ b/sympy/physics/quantum/tensorproduct.py\n@@ -246,9 +246,12 @@ def _eval_expand_tensorproduct(self, **hints):\n if isinstance(args[i], Add):\n for aa in args[i].args:\n tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])\n- if isinstance(tp, TensorProduct):\n- tp = tp._eval_expand_tensorproduct()\n- add_args.append(tp)\n+ c_part, nc_part = tp.args_cnc()\n+ #Check for TensorProduct object: is the one object in nc_part, if any:\n+ #(Note: any other object type to be expanded must be added here)\n+ if len(nc_part)==1 and isinstance(nc_part[0], TensorProduct):\n+ nc_part = (nc_part[0]._eval_expand_tensorproduct(), )\n+ add_args.append(Mul(*c_part)*Mul(*nc_part))\n break\n \n if add_args:\ndiff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py\nindex f1765640ea2a..11d8abb8aa75 100644\n--- a/sympy/physics/quantum/tests/test_tensorproduct.py\n+++ b/sympy/physics/quantum/tests/test_tensorproduct.py\n@@ -44,6 +44,13 @@ def test_tensor_product_abstract():\n def test_tensor_product_expand():\n assert TP(A + B, B + C).expand(tensorproduct=True) == \\\n TP(A, B) + TP(A, C) + TP(B, B) + TP(B, C)\n+ #Tests for fix of issue #24142\n+ assert TP(A-B, B-A).expand(tensorproduct=True) == \\\n+ TP(A, B) - TP(A, A) - TP(B, B) + TP(B, A)\n+ assert TP(2*A + B, A + B).expand(tensorproduct=True) == \\\n+ 2 * TP(A, A) + 2 * TP(A, B) + TP(B, A) + TP(B, B)\n+ assert TP(2 * A * B + A, A + B).expand(tensorproduct=True) == \\\n+ 2 * TP(A*B, A) + 2 * TP(A*B, B) + TP(A, A) + TP(A, B)\n \n \n def test_tensor_product_commutator():\n" }
[ { "diff_hunk": "@@ -246,9 +246,12 @@ def _eval_expand_tensorproduct(self, **hints):\n if isinstance(args[i], Add):\n for aa in args[i].args:\n tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])\n- if isinstance(tp, TensorProduct):\n- tp = tp._eval_expand_tensorproduct()\n- add_args.append(tp)\n+ c_part, nc_part = tp.args_cnc()\n+ #Check for TensorProduct object: is the one object in nc_part, if any:\n+ #(Note: any other object type to be expanded must be added here)\n+ if len(nc_part)==1 and isinstance(nc_part[0], TensorProduct):", "line": null, "original_line": 252, "original_start_line": null, "path": "sympy/physics/quantum/tensorproduct.py", "start_line": null, "text": "@user1:\n```suggestion\r\n if len(nc_part) == 1 and isinstance(nc_part[0], TensorProduct):\r\n```\n\n@user1:\nAdd whitespace for operations\r\nhttps://peps.python.org/pep-0008/#other-recommendations" }, { "diff_hunk": "@@ -246,9 +246,12 @@ def _eval_expand_tensorproduct(self, **hints):\n if isinstance(args[i], Add):\n for aa in args[i].args:\n tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])\n- if isinstance(tp, TensorProduct):\n- tp = tp._eval_expand_tensorproduct()\n- add_args.append(tp)\n+ c_part, nc_part = tp.args_cnc()\n+ #Check for TensorProduct object: is the one object in nc_part, if any:\n+ #(Note: any other object type to be expanded must be added here)", "line": null, "original_line": 251, "original_start_line": null, "path": "sympy/physics/quantum/tensorproduct.py", "start_line": null, "text": "@user1:\n```suggestion\r\n # (Note: any other object type to be expanded must be added here)\r\n```\n\n@user1:\nThere should be one spacing between `#` and comment for inline comments\r\nhttps://peps.python.org/pep-0008/#inline-comments" }, { "diff_hunk": "@@ -246,9 +246,12 @@ def _eval_expand_tensorproduct(self, **hints):\n if isinstance(args[i], Add):\n for aa in args[i].args:\n tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])\n- if isinstance(tp, TensorProduct):\n- tp = tp._eval_expand_tensorproduct()\n- add_args.append(tp)\n+ c_part, nc_part = tp.args_cnc()\n+ #Check for TensorProduct object: is the one object in nc_part, if any:", "line": null, "original_line": 250, "original_start_line": null, "path": "sympy/physics/quantum/tensorproduct.py", "start_line": null, "text": "@user1:\n```suggestion\r\n # Check for TensorProduct object: is the one object in nc_part, if any:\r\n```" }, { "diff_hunk": "@@ -44,6 +44,13 @@ def test_tensor_product_abstract():\n def test_tensor_product_expand():\n assert TP(A + B, B + C).expand(tensorproduct=True) == \\\n TP(A, B) + TP(A, C) + TP(B, B) + TP(B, C)\n+ #Tests for fix of issue #24142\n+ assert TP(A-B, B-A).expand(tensorproduct=True) == \\\n+ TP(A, B) - TP(A, A) - TP(B, B) + TP(B, A)", "line": null, "original_line": 49, "original_start_line": null, "path": "sympy/physics/quantum/tests/test_tensorproduct.py", "start_line": null, "text": "@user1:\n```suggestion\r\n TP(A, B) - TP(A, A) - TP(B, B) + TP(B, A)\r\n```" } ]
ecc2cc9114fb688d912e76871c39d7d5be394e34
diff --git a/.mailmap b/.mailmap index 3e85675b0853..025916850778 100644 --- a/.mailmap +++ b/.mailmap @@ -449,6 +449,7 @@ Colin Marquardt <[email protected]> Colleen Lee <[email protected]> <[email protected]> Comer Duncan <[email protected]> Constantin Mateescu <[email protected]> +Costor <[email protected]> Craig A. Stoudt <[email protected]> Cristian Di Pietrantonio <[email protected]> Cristóvão Sousa <[email protected]> diff --git a/sympy/physics/quantum/tensorproduct.py b/sympy/physics/quantum/tensorproduct.py index 78accaf2955c..cc2e02f2ecdd 100644 --- a/sympy/physics/quantum/tensorproduct.py +++ b/sympy/physics/quantum/tensorproduct.py @@ -246,9 +246,12 @@ def _eval_expand_tensorproduct(self, **hints): if isinstance(args[i], Add): for aa in args[i].args: tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:]) - if isinstance(tp, TensorProduct): - tp = tp._eval_expand_tensorproduct() - add_args.append(tp) + c_part, nc_part = tp.args_cnc() + # Check for TensorProduct object: is the one object in nc_part, if any: + # (Note: any other object type to be expanded must be added here) + if len(nc_part) == 1 and isinstance(nc_part[0], TensorProduct): + nc_part = (nc_part[0]._eval_expand_tensorproduct(), ) + add_args.append(Mul(*c_part)*Mul(*nc_part)) break if add_args: diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py index f1765640ea2a..882638376873 100644 --- a/sympy/physics/quantum/tests/test_tensorproduct.py +++ b/sympy/physics/quantum/tests/test_tensorproduct.py @@ -44,6 +44,13 @@ def test_tensor_product_abstract(): def test_tensor_product_expand(): assert TP(A + B, B + C).expand(tensorproduct=True) == \ TP(A, B) + TP(A, C) + TP(B, B) + TP(B, C) + #Tests for fix of issue #24142 + assert TP(A-B, B-A).expand(tensorproduct=True) == \ + TP(A, B) - TP(A, A) - TP(B, B) + TP(B, A) + assert TP(2*A + B, A + B).expand(tensorproduct=True) == \ + 2 * TP(A, A) + 2 * TP(A, B) + TP(B, A) + TP(B, B) + assert TP(2 * A * B + A, A + B).expand(tensorproduct=True) == \ + 2 * TP(A*B, A) + 2 * TP(A*B, B) + TP(A, A) + TP(A, B) def test_tensor_product_commutator():
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-23932@376abd4
sympy/sympy
Python
23,932
SciPyPrinter: polygamma (fix gh-23924)
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs ~~Fixes #23924~~ #### Brief description of what is fixed or changed ~~Attempt to fix CI failure.~~ Add polygamma to explicit list of functions that SciPy can evaluate. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2022-08-15T07:45:13Z
CI failure: lambdify: NameError: name 'polygamma' is not defined This failure is seen in CI for the 3.11 optional dependency tests: ``` ____________ sympy/utilities/tests/test_lambdify.py:test_scipy_fns _____________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/utilities/tests/test_lambdify.py", line 1103, in test_scipy_fns assert abs(f(tv) - sympy_result) < 1e-13*(1 + abs(sympy_result)) ^^^^^ File "<lambdifygenerated-604>", line 2, in _lambdifygenerated return polygamma(0, x) ^^^^^^^^^ NameError: name 'polygamma' is not defined ``` That CI job uses latest cython, numpy and scipy from git so this is likely due to a change in scipy.
This job is not "required" so a PR can still be merged if it fails. Resolving the cause of this though is a release blocker for 1.11. If the next SciPy release is going to break `lambdify` then we might need a fix for that in SymPy 1.11. I don't know `lambdify` very well though so I would appreciate it if someone who does takes a look at what happens with lambdifying polygamma using the latest scipy from git. Polygamma is still in the same location in SciPy if nothing else. https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.polygamma.html
[ { "body": "This failure is seen in CI for the 3.11 optional dependency tests:\r\n```\r\n____________ sympy/utilities/tests/test_lambdify.py:test_scipy_fns _____________\r\nTraceback (most recent call last):\r\n File \"/home/runner/work/sympy/sympy/sympy/utilities/tests/test_lambdify.py\", line 1103, in test_scipy_fns\r\n assert abs(f(tv) - sympy_result) < 1e-13*(1 + abs(sympy_result))\r\n ^^^^^\r\n File \"<lambdifygenerated-604>\", line 2, in _lambdifygenerated\r\n return polygamma(0, x)\r\n ^^^^^^^^^\r\nNameError: name 'polygamma' is not defined\r\n```\r\nThat CI job uses latest cython, numpy and scipy from git so this is likely due to a change in scipy.", "number": 23924, "title": "CI failure: lambdify: NameError: name 'polygamma' is not defined" } ]
f0154339c0b4d2602f0f4a6db4488d572b40e16f
{ "head_commit": "376abd48e8f6595f52ee0af1aaba920cbd5ad5b8", "head_commit_message": "SciPyPrinter: polygamma (fix gh-23924)", "patch_to_review": "diff --git a/sympy/printing/numpy.py b/sympy/printing/numpy.py\nindex 5ab29585a17a..a42d1df5d26d 100644\n--- a/sympy/printing/numpy.py\n+++ b/sympy/printing/numpy.py\n@@ -290,6 +290,7 @@ def _print_NDimArray(self, expr):\n 'gamma': 'gamma',\n 'loggamma': 'gammaln',\n 'digamma': 'psi',\n+ 'polygamma': 'polygamma',\n 'RisingFactorial': 'poch',\n 'jacobi': 'eval_jacobi',\n 'gegenbauer': 'eval_gegenbauer',\ndiff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py\nindex 50b285711830..f8269323ce83 100644\n--- a/sympy/utilities/tests/test_lambdify.py\n+++ b/sympy/utilities/tests/test_lambdify.py\n@@ -23,7 +23,7 @@\n from sympy.functions.special.beta_functions import (beta, betainc, betainc_regularized)\n from sympy.functions.special.delta_functions import (Heaviside)\n from sympy.functions.special.error_functions import (Ei, erf, erfc, fresnelc, fresnels)\n-from sympy.functions.special.gamma_functions import (digamma, gamma, loggamma)\n+from sympy.functions.special.gamma_functions import (digamma, gamma, loggamma, polygamma)\n from sympy.integrals.integrals import Integral\n from sympy.logic.boolalg import (And, false, ITE, Not, Or, true)\n from sympy.matrices.expressions.dotproduct import DotProduct\n@@ -1077,7 +1077,7 @@ def test_scipy_fns():\n if not scipy:\n skip(\"scipy not installed\")\n \n- single_arg_sympy_fns = [Ei, erf, erfc, factorial, gamma, loggamma, digamma]\n+ single_arg_sympy_fns = [Ei, erf, erfc, factorial, gamma, loggamma, digamma, polygamma]\n single_arg_scipy_fns = [scipy.special.expi, scipy.special.erf, scipy.special.erfc,\n scipy.special.factorial, scipy.special.gamma, scipy.special.gammaln,\n scipy.special.psi]\n@@ -1097,7 +1097,7 @@ def test_scipy_fns():\n tv = numpy.abs(tv)\n # SymPy's digamma evaluates as polygamma(0, z)\n # which SciPy supports for real arguments only\n- if sympy_fn == digamma:\n+ if sympy_fn in (digamma, polygamma):\n tv = numpy.real(tv)\n sympy_result = sympy_fn(tv).evalf()\n assert abs(f(tv) - sympy_result) < 1e-13*(1 + abs(sympy_result))\n" }
[ { "diff_hunk": "@@ -1077,7 +1077,7 @@ def test_scipy_fns():\n if not scipy:\n skip(\"scipy not installed\")\n \n- single_arg_sympy_fns = [Ei, erf, erfc, factorial, gamma, loggamma, digamma]\n+ single_arg_sympy_fns = [Ei, erf, erfc, factorial, gamma, loggamma, digamma, polygamma]", "line": null, "original_line": 1080, "original_start_line": null, "path": "sympy/utilities/tests/test_lambdify.py", "start_line": null, "text": "@user1:\nI have no idea how this test works, but does this \"guarantee\" that polygamma works when both arguments are used? (Just considering the variable name...)\n\n@author:\nNo, I didn't read the test code carefully enough, next commit now passes for me locally." } ]
9c7ca29c35fbda36d84157767748842334bf384d
diff --git a/sympy/printing/numpy.py b/sympy/printing/numpy.py index 5ab29585a17a..a42d1df5d26d 100644 --- a/sympy/printing/numpy.py +++ b/sympy/printing/numpy.py @@ -290,6 +290,7 @@ def _print_NDimArray(self, expr): 'gamma': 'gamma', 'loggamma': 'gammaln', 'digamma': 'psi', + 'polygamma': 'polygamma', 'RisingFactorial': 'poch', 'jacobi': 'eval_jacobi', 'gegenbauer': 'eval_gegenbauer', diff --git a/sympy/printing/tests/test_numpy.py b/sympy/printing/tests/test_numpy.py index 7b83e2a4f659..6e7e6b71e599 100644 --- a/sympy/printing/tests/test_numpy.py +++ b/sympy/printing/tests/test_numpy.py @@ -1,8 +1,10 @@ from sympy.concrete.summations import Sum from sympy.core.mod import Mod from sympy.core.relational import (Equality, Unequality) +from sympy.core.symbol import Symbol from sympy.functions.elementary.miscellaneous import sqrt from sympy.functions.elementary.piecewise import Piecewise +from sympy.functions.special.gamma_functions import polygamma from sympy.matrices.expressions.blockmatrix import BlockMatrix from sympy.matrices.expressions.matexpr import MatrixSymbol from sympy.matrices.expressions.special import Identity @@ -341,3 +343,6 @@ def test_scipy_print_methods(): assert hasattr(prntr, '_print_erf') assert hasattr(prntr, '_print_factorial') assert hasattr(prntr, '_print_chebyshevt') + k = Symbol('k', integer=True, nonnegative=True) + x = Symbol('x', real=True) + assert prntr.doprint(polygamma(k, x)) == "scipy.special.polygamma(k, x)" diff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py index 50b285711830..757219c85a0b 100644 --- a/sympy/utilities/tests/test_lambdify.py +++ b/sympy/utilities/tests/test_lambdify.py @@ -23,7 +23,7 @@ from sympy.functions.special.beta_functions import (beta, betainc, betainc_regularized) from sympy.functions.special.delta_functions import (Heaviside) from sympy.functions.special.error_functions import (Ei, erf, erfc, fresnelc, fresnels) -from sympy.functions.special.gamma_functions import (digamma, gamma, loggamma) +from sympy.functions.special.gamma_functions import (digamma, gamma, loggamma, polygamma) from sympy.integrals.integrals import Integral from sympy.logic.boolalg import (And, false, ITE, Not, Or, true) from sympy.matrices.expressions.dotproduct import DotProduct @@ -1080,7 +1080,7 @@ def test_scipy_fns(): single_arg_sympy_fns = [Ei, erf, erfc, factorial, gamma, loggamma, digamma] single_arg_scipy_fns = [scipy.special.expi, scipy.special.erf, scipy.special.erfc, scipy.special.factorial, scipy.special.gamma, scipy.special.gammaln, - scipy.special.psi] + scipy.special.psi] numpy.random.seed(0) for (sympy_fn, scipy_fn) in zip(single_arg_sympy_fns, single_arg_scipy_fns): f = lambdify(x, sympy_fn(x), modules="scipy") @@ -1104,18 +1104,20 @@ def test_scipy_fns(): assert abs(f(tv) - scipy_fn(tv)) < 1e-13*(1 + abs(sympy_result)) double_arg_sympy_fns = [RisingFactorial, besselj, bessely, besseli, - besselk] + besselk, polygamma] double_arg_scipy_fns = [scipy.special.poch, scipy.special.jv, - scipy.special.yv, scipy.special.iv, scipy.special.kv] + scipy.special.yv, scipy.special.iv, scipy.special.kv, scipy.special.polygamma] for (sympy_fn, scipy_fn) in zip(double_arg_sympy_fns, double_arg_scipy_fns): f = lambdify((x, y), sympy_fn(x, y), modules="scipy") for i in range(20): # SciPy supports only real orders of Bessel functions tv1 = numpy.random.uniform(-10, 10) tv2 = numpy.random.uniform(-10, 10) + 1j*numpy.random.uniform(-5, 5) - # SciPy supports poch for real arguments only - if sympy_fn == RisingFactorial: + 
# SciPy requires a real valued 2nd argument for: poch, polygamma + if sympy_fn in (RisingFactorial, polygamma): tv2 = numpy.real(tv2) + if sympy_fn == polygamma: + tv1 = abs(int(tv1)) # first argument to polygamma must be a non-negative integral. sympy_result = sympy_fn(tv1, tv2).evalf() assert abs(f(tv1, tv2) - sympy_result) < 1e-13*(1 + abs(sympy_result)) assert abs(f(tv1, tv2) - scipy_fn(tv1, tv2)) < 1e-13*(1 + abs(sympy_result))
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-23786@4b95864
sympy/sympy
Python
23,786
hep: Fixes bug in gamma_trace of sum of products containing GammaMatrix mixed with other symbols
#### References to other Issues or PRs Fixes #13636 #### Brief description of what is fixed or changed <!-- BEGIN RELEASE NOTES --> * physics.hep * Fixed a bug in finding traces of sums of products of GammaMatrix mixed with other factors. (#13636) <!-- END RELEASE NOTES -->
2022-07-16T17:48:29Z
Strangely behaved gamma_trace (gamma matrix trace) I am trying to use the high energy physics module of SymPy to compute traces of gamma matrices using the dedicated function `sympy.physics.hep.gamma_matrices.gamma_trace`. I have come accross the following strange behaviour: in some cases, the trace of a sum is not computed to be the sum of the traces. Here is an example: ```python from sympy import Symbol from sympy.tensor.tensor import tensor_indices, tensorhead from sympy.physics.hep.gamma_matrices import GammaMatrix as G, gamma_trace, LorentzIndex pi, ki, pf = tensorhead("pi, ki, pf", [LorentzIndex], [[1]]) i0, i1, i2, i3, i4 = tensor_indices("i0:5", LorentzIndex) x = Symbol("x") pis = pi(i2) * G(-i2) kis = ki(i3) * G(-i3) pfs = pf(i4) * G(-i4) # Show the trace of A + B print gamma_trace(pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0) + pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1)) # Show the trace of A print gamma_trace(pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0)) # Show the trace of B print gamma_trace(pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1)) ``` It may be a normal behaviour, but I couldn't find any hint in the docs dedicated to the module.
`gamma_trace(B)` differs from that part of `gamma_trace(A + B)` which comes from `B` because it is processed by `_simplify_single_line`. It calls `extract_type_tens` in order to collect all gamma matrices, but only the explicit gamma matrix factors are found, not those embedded in `pis, kis, pfs`. Hence it seems that the result of `gamma_trace(B)` is wrong. The result of `gamma_trace(B)` is actually right and similarly for `gamma_trace(A)`. Only `gamma_trace(A + B)` is problematic. Also, I am not sure I understand the argument about the "explicit gamma factors". Do you mean there is a problem with using the intermediary variables `pis, kis, pfs` ? Yet, it seems the documentation gives similar examples. All in all, should this behaviour be considered a bug? And in any case, what is the correct way to compute the trace of a tensor expression involving gamma matrices without worrying too much about this kind of pitfalls? Confirmed this is still a bug in sympy 1.10. This should be considered a bug. The `sympy.physics.hep` module is unmaintained right now. If you want to contribute improvements or take over maintenance then go ahead. OK, I'm pretty sure I have now found the right fix for this. Passes all tests including newly added test for this bug. I will try to create a pull request tomorrow after I get some sleep, run flake8 and black, make sure I have set up a branch in my fork correctly, etc. You don't need to run black. Since black is not currently used if you run it then it will probably rewrite everything so it is best not to do that.
[ { "body": "I am trying to use the high energy physics module of SymPy to compute traces of gamma matrices using the dedicated function `sympy.physics.hep.gamma_matrices.gamma_trace`. I have come accross the following strange behaviour: in some cases, the trace of a sum is not computed to be the sum of the traces. Here is an example:\r\n\r\n```python\r\nfrom sympy import Symbol\r\nfrom sympy.tensor.tensor import tensor_indices, tensorhead\r\nfrom sympy.physics.hep.gamma_matrices import GammaMatrix as G, gamma_trace, LorentzIndex\r\n\r\npi, ki, pf = tensorhead(\"pi, ki, pf\", [LorentzIndex], [[1]])\r\ni0, i1, i2, i3, i4 = tensor_indices(\"i0:5\", LorentzIndex)\r\nx = Symbol(\"x\")\r\npis = pi(i2) * G(-i2)\r\nkis = ki(i3) * G(-i3)\r\npfs = pf(i4) * G(-i4)\r\n\r\n# Show the trace of A + B\r\nprint gamma_trace(pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0) + pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1))\r\n# Show the trace of A\r\nprint gamma_trace(pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0))\r\n# Show the trace of B\r\nprint gamma_trace(pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1))\r\n```\r\n\r\nIt may be a normal behaviour, but I couldn't find any hint in the docs dedicated to the module.", "number": 13636, "title": "Strangely behaved gamma_trace (gamma matrix trace)" } ]
54885cf23d6206a5a6cf4f71761cb07fbf2c6708
{ "head_commit": "4b958642e6e3a126a06abf039512c4b5a3f109a5", "head_commit_message": "Add myself to .mailmap", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex e832c7be1ef2..03eca177f6ff 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -588,6 +588,8 @@ Gilbert Gede <[email protected]> <[email protected]>\n Gilles Schintgen <[email protected]>\n Gina <[email protected]>\n Gleb Siroki <[email protected]>\n+Glenn Horton-Smith <[email protected]>\n+Glenn Horton-Smith <[email protected]> Glenn Horton-Smith <[email protected]>\n GolimarOurHero <[email protected]>\n Goutham Lakshminarayan <[email protected]> Goutham <devnull@localhost>\n Govind Sahai <[email protected]>\ndiff --git a/sympy/physics/hep/gamma_matrices.py b/sympy/physics/hep/gamma_matrices.py\nindex 23284e4a9297..3009550888f7 100644\n--- a/sympy/physics/hep/gamma_matrices.py\n+++ b/sympy/physics/hep/gamma_matrices.py\n@@ -190,7 +190,7 @@ def gamma_trace(t):\n \n \"\"\"\n if isinstance(t, TensAdd):\n- res = TensAdd(*[_trace_single_line(x) for x in t.args])\n+ res = TensAdd(*[gamma_trace(x) for x in t.args])\n return res\n t = _simplify_single_line(t)\n res = _trace_single_line(t)\ndiff --git a/sympy/physics/hep/tests/test_gamma_matrices.py b/sympy/physics/hep/tests/test_gamma_matrices.py\nindex 27509803f24f..47e4abbac664 100644\n--- a/sympy/physics/hep/tests/test_gamma_matrices.py\n+++ b/sympy/physics/hep/tests/test_gamma_matrices.py\n@@ -3,6 +3,7 @@\n TensExpr, canon_bp\n from sympy.physics.hep.gamma_matrices import GammaMatrix as G, LorentzIndex, \\\n kahane_simplify, gamma_trace, _simplify_single_line, simplify_gamma_expression\n+from sympy import Symbol\n \n \n def _is_tensor_eq(arg1, arg2):\n@@ -399,3 +400,25 @@ def test_gamma_matrix_trace():\n t = ps*ps*ps*ps*ps*ps*ps*ps\n r = gamma_trace(t)\n assert r.equals(4*p2*p2*p2*p2)\n+\n+def test_bug_13636():\n+ \"\"\"Test issue 13636 regarding handling traces of sums of products\n+ of GammaMatrix mixed with other factors.\"\"\"\n+ pi, ki, pf = tensor_heads(\"pi, ki, pf\", [LorentzIndex])\n+ i0, i1, i2, i3, i4 = tensor_indices(\"i0:5\", LorentzIndex)\n+ x = Symbol(\"x\")\n+ pis = pi(i2) * G(-i2)\n+ kis = ki(i3) * G(-i3)\n+ pfs = pf(i4) * G(-i4)\n+\n+ a = pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0)\n+ b = pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1)\n+ ta = gamma_trace(a)\n+ tb = gamma_trace(b)\n+ t_a_plus_b = gamma_trace(a + b)\n+ assert ta.equals(\n+ -16 * ki(i0) * ki(-i0) * pf(i1) * pi(-i1)\n+ + 32 * ki(i0) * ki(i1) * pf(-i0) * pi(-i1)\n+ )\n+ assert tb.equals(-8 * x * ki(i0) * pf(-i0) * pi(i1) * pi(-i1))\n+ assert t_a_plus_b.equals(ta + tb)\n" }
[ { "diff_hunk": "@@ -399,3 +400,25 @@ def test_gamma_matrix_trace():\n t = ps*ps*ps*ps*ps*ps*ps*ps\n r = gamma_trace(t)\n assert r.equals(4*p2*p2*p2*p2)\n+\n+def test_bug_13636():\n+ \"\"\"Test issue 13636 regarding handling traces of sums of products\n+ of GammaMatrix mixed with other factors.\"\"\"\n+ pi, ki, pf = tensor_heads(\"pi, ki, pf\", [LorentzIndex])\n+ i0, i1, i2, i3, i4 = tensor_indices(\"i0:5\", LorentzIndex)\n+ x = Symbol(\"x\")\n+ pis = pi(i2) * G(-i2)\n+ kis = ki(i3) * G(-i3)\n+ pfs = pf(i4) * G(-i4)\n+\n+ a = pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0)\n+ b = pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1)\n+ ta = gamma_trace(a)\n+ tb = gamma_trace(b)\n+ t_a_plus_b = gamma_trace(a + b)\n+ assert ta.equals(\n+ -16 * ki(i0) * ki(-i0) * pf(i1) * pi(-i1)\n+ + 32 * ki(i0) * ki(i1) * pf(-i0) * pi(-i1)\n+ )\n+ assert tb.equals(-8 * x * ki(i0) * pf(-i0) * pi(i1) * pi(-i1))\n+ assert t_a_plus_b.equals(ta + tb)", "line": null, "original_line": 424, "original_start_line": 419, "path": "sympy/physics/hep/tests/test_gamma_matrices.py", "start_line": null, "text": "@user1:\nDoes `==` not work here or does it need to use `.equals`?\n\n@author:\nGood question. I just checked, and indeed == does not work here. The .equals() function for tensor objects applies sympify and .cannon_bp to the tensor expression before comparing, while ._eq_ is just Basic._eq_, and is false for this comparison because the expression trees are not exactly identical. The other test functions such as test_gamma_matrix_trace() also use .equals().\r\n\n\n@user1:\nUsually it is better to construct the exact expression for comparison in tests so that they can be compared with `==`. Sometimes that is not be best option because the exact expression is confusing and does not display the intent of the test well.\r\n\r\nI'm not sure what the preferred answer would be here. What exactly is the difference between the compared expressions? You can use `srepr` as one way to view the expression structure explicitly.\n\n@author:\nta == `4*((-4)*ki(L_0)*ki(-L_0)*pf(L_1)*pi(-L_1) + 8*ki(L_0)*ki(L_1)*pf(-L_0)*pi(-L_1))` which is the same as the expression being tested against except for distribution of the factor of 4 over the coefficients. I can use the exact expression for the test, I don't mind.\r\n\r\ntb and ta_plus_tb are the same as the expressions they are being compared against, so we could use == there with no problem.\n\n@author:\nThis is fixed now." } ]
f1d368f0d030b6ed1f40a93f8cb7d948fa7a4d42
diff --git a/.mailmap b/.mailmap index e832c7be1ef2..03eca177f6ff 100644 --- a/.mailmap +++ b/.mailmap @@ -588,6 +588,8 @@ Gilbert Gede <[email protected]> <[email protected]> Gilles Schintgen <[email protected]> Gina <[email protected]> Gleb Siroki <[email protected]> +Glenn Horton-Smith <[email protected]> +Glenn Horton-Smith <[email protected]> Glenn Horton-Smith <[email protected]> GolimarOurHero <[email protected]> Goutham Lakshminarayan <[email protected]> Goutham <devnull@localhost> Govind Sahai <[email protected]> diff --git a/sympy/physics/hep/gamma_matrices.py b/sympy/physics/hep/gamma_matrices.py index 23284e4a9297..3009550888f7 100644 --- a/sympy/physics/hep/gamma_matrices.py +++ b/sympy/physics/hep/gamma_matrices.py @@ -190,7 +190,7 @@ def gamma_trace(t): """ if isinstance(t, TensAdd): - res = TensAdd(*[_trace_single_line(x) for x in t.args]) + res = TensAdd(*[gamma_trace(x) for x in t.args]) return res t = _simplify_single_line(t) res = _trace_single_line(t) diff --git a/sympy/physics/hep/tests/test_gamma_matrices.py b/sympy/physics/hep/tests/test_gamma_matrices.py index 27509803f24f..f5dc8f6d6f10 100644 --- a/sympy/physics/hep/tests/test_gamma_matrices.py +++ b/sympy/physics/hep/tests/test_gamma_matrices.py @@ -3,6 +3,7 @@ TensExpr, canon_bp from sympy.physics.hep.gamma_matrices import GammaMatrix as G, LorentzIndex, \ kahane_simplify, gamma_trace, _simplify_single_line, simplify_gamma_expression +from sympy import Symbol def _is_tensor_eq(arg1, arg2): @@ -399,3 +400,26 @@ def test_gamma_matrix_trace(): t = ps*ps*ps*ps*ps*ps*ps*ps r = gamma_trace(t) assert r.equals(4*p2*p2*p2*p2) + + +def test_bug_13636(): + """Test issue 13636 regarding handling traces of sums of products + of GammaMatrix mixed with other factors.""" + pi, ki, pf = tensor_heads("pi, ki, pf", [LorentzIndex]) + i0, i1, i2, i3, i4 = tensor_indices("i0:5", LorentzIndex) + x = Symbol("x") + pis = pi(i2) * G(-i2) + kis = ki(i3) * G(-i3) + pfs = pf(i4) * G(-i4) + + a = pfs * G(i0) * kis * G(i1) * pis * G(-i1) * kis * G(-i0) + b = pfs * G(i0) * kis * G(i1) * pis * x * G(-i0) * pi(-i1) + ta = gamma_trace(a) + tb = gamma_trace(b) + t_a_plus_b = gamma_trace(a + b) + assert ta == 4 * ( + -4 * ki(i0) * ki(-i0) * pf(i1) * pi(-i1) + + 8 * ki(i0) * ki(i1) * pf(-i0) * pi(-i1) + ) + assert tb == -8 * x * ki(i0) * pf(-i0) * pi(i1) * pi(-i1) + assert t_a_plus_b == ta + tb
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-23493@2d971fb
sympy/sympy
Python
23,493
calculus: Fixes is_increasing() cases which undergo an infinite recursive loop through periodicity()
#### References to other Issues or PRs

Fixes #23401

#### Brief description of what is fixed or changed

The `if` routine condition has been modified. The coefficient `coeff` turns out to be a Float, so the earlier identity check did not work as expected.

On master -
```
>>> expr = (p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)
>>> is_increasing(expr,Interval(1,2),p)  # No result: evaluation internally falls into an infinite recursion.
```

On current branch -
```
>>> expr = "(p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)"
>>> is_increasing(expr,Interval(1,2))
True
>>> is_increasing(expr,Interval(1,2),p)
True
```

#### Other comments

The failing example from the issue has been used as the unit test, and some other relevant examples have been tested!

#### Release Notes

<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
2022-05-13T10:11:35Z
is_increasing() function can lead to periodicity() infinite recursive loop **Environment:** Windows 10 Python 3.9 Sympy 1.10.1 Running in VsCode Jupyter Notebook **Explanation:** Some expressions can cause the is_increasing() function to lead to an infinite recursive loop via the periodicity() function. Below is the simplest code and expression I could find to cause this error (although there may be simpler). I have no idea how or why this is happening. The simplest function has an asymptote at -1 and is not periodic. ``` import sympy var = sympy.symbols("p") # original expression and simplest that still causes error #expr = "(p + 1.5713046)/(-7.89819399268874e-8*p**2 + 0.008879019*p + 0.112832144)" expr = "(p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)" print(sympy.is_increasing(expr, sympy.Interval(1, 2), var)) ``` And here is the full error report (stopped with CTRL-C so the python kernel doesn't crash): ``` --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_28340/1436643031.py in <module> 5 expr = "(p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)" 6 ----> 7 print(sympy.is_increasing(expr, sympy.Interval(1,2), var)) ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\singularities.py in is_increasing(expression, interval, symbol) 199 200 """ --> 201 return monotonicity_helper(expression, lambda x: x >= 0, interval, symbol) 202 203 ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\singularities.py in monotonicity_helper(expression, predicate, interval, symbol) 155 variable = symbol or (free.pop() if free else Symbol('x')) 156 derivative = expression.diff(variable) --> 157 predicate_interval = solveset(predicate(derivative), variable, S.Reals) 158 return interval.is_subset(predicate_interval) 159 ~\AppData\Roaming\Python\Python39\site-packages\sympy\solvers\solveset.py in solveset(f, symbol, domain) 2214 if symbol not in _rc: 2215 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1] -> 2216 rv = solveset(f.xreplace({symbol: x}), x, domain) 2217 # try to use the original symbol if possible 2218 try: ~\AppData\Roaming\Python\Python39\site-packages\sympy\solvers\solveset.py in solveset(f, symbol, domain) 2238 f = piecewise_fold(f) 2239 -> 2240 return _solveset(f, symbol, domain, _check=True) 2241 2242 ~\AppData\Roaming\Python\Python39\site-packages\sympy\solvers\solveset.py in _solveset(f, symbol, domain, _check) 1044 from .inequalities import solve_univariate_inequality 1045 try: -> 1046 result = solve_univariate_inequality( 1047 f, symbol, domain=domain, relational=False) 1048 except NotImplementedError: ~\AppData\Roaming\Python\Python39\site-packages\sympy\solvers\inequalities.py in solve_univariate_inequality(expr, gen, relational, domain, continuous) 489 else: 490 e = expr.lhs - expr.rhs --> 491 period = periodicity(e, gen) 492 if period == S.Zero: 493 e = expand_mul(e) ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\util.py in periodicity(f, symbol, check) 486 487 else: --> 488 period = _periodicity(g.args, symbol) 489 490 elif f.is_Add: ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\util.py in _periodicity(args, symbol) 558 periods = [] 559 for f in args: --> 560 period = periodicity(f, symbol) 561 if period is None: 562 return None ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\util.py in periodicity(f, symbol, check) 483 coeff, g = f.as_independent(symbol, as_Add=False) 484 if isinstance(g, TrigonometricFunction) or coeff is not 
S.One: --> 485 period = periodicity(g, symbol) 486 487 else: ~\AppData\Roaming\Python\Python39\site-packages\sympy\calculus\util.py in periodicity(f, symbol, check) 483 coeff, g = f.as_independent(symbol, as_Add=False) 484 if isinstance(g, TrigonometricFunction) or coeff is not S.One: --> 485 period = periodicity(g, symbol) 486 487 else: ... ``` Any ideas about what is going on here would be appreciated as I would like to use the is_increasing() function to check arbitrary expressions but that obviously can't happen with this bug around. Thanks
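With the fix from the linked PR applied, the reported call terminates; below is a minimal check mirroring the regression test added there (the expected `True` is taken from that test, not re-derived here):

```python
from sympy import Symbol, Interval
from sympy.calculus.singularities import is_increasing

x = Symbol("x")
expr = (x + 1)/(-1.0e-3*x**2 + 0.1*x + 0.1)

# Terminates and returns True once the periodicity() coefficient check is fixed.
print(is_increasing(expr, Interval(1, 2), x))  # True
```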
This seems to be the fix:

```diff
diff --git a/sympy/calculus/util.py b/sympy/calculus/util.py
index 4f6ada07f1..ed8c4e2fbd 100644
--- a/sympy/calculus/util.py
+++ b/sympy/calculus/util.py
@@ -480,9 +480,8 @@ def _check(orig_f, period):
 
     elif f.is_Mul:
         coeff, g = f.as_independent(symbol, as_Add=False)
-        if isinstance(g, TrigonometricFunction) or coeff is not S.One:
+        if isinstance(g, TrigonometricFunction) or coeff is not S.One and not (coeff.is_Float and coeff == 1.0):
             period = periodicity(g, symbol)
-        else:
             period = _periodicity(g.args, symbol)
```

Looks like `coeff` in this case is a Float so testing `is not S.One` does the wrong thing.
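A minimal, self-contained illustration of the pitfall (the check that was eventually merged uses `coeff != 1` rather than the identity test):

```python
from sympy import S, Float

c = Float(1.0)
print(c is S.One)  # False: a Float coefficient is never the S.One singleton
print(c == 1)      # True: it is numerically equal to one
print(c != 1)      # False, so a `coeff != 1` test treats 1.0 as a trivial coefficient
```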
[ { "body": "**Environment:**\r\nWindows 10\r\nPython 3.9\r\nSympy 1.10.1\r\nRunning in VsCode Jupyter Notebook\r\n\r\n**Explanation:**\r\nSome expressions can cause the is_increasing() function to lead to an infinite recursive loop via the periodicity() function. Below is the simplest code and expression I could find to cause this error (although there may be simpler). I have no idea how or why this is happening. The simplest function has an asymptote at -1 and is not periodic.\r\n\r\n```\r\nimport sympy\r\nvar = sympy.symbols(\"p\")\r\n# original expression and simplest that still causes error\r\n#expr = \"(p + 1.5713046)/(-7.89819399268874e-8*p**2 + 0.008879019*p + 0.112832144)\"\r\nexpr = \"(p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)\"\r\n\r\nprint(sympy.is_increasing(expr, sympy.Interval(1, 2), var))\r\n```\r\n\r\nAnd here is the full error report (stopped with CTRL-C so the python kernel doesn't crash):\r\n```\r\n---------------------------------------------------------------------------\r\nKeyboardInterrupt Traceback (most recent call last)\r\n~\\AppData\\Local\\Temp/ipykernel_28340/1436643031.py in <module>\r\n 5 expr = \"(p + 1)/(-1.0e-3*p**2 + 0.1*p + 0.1)\"\r\n 6 \r\n----> 7 print(sympy.is_increasing(expr, sympy.Interval(1,2), var))\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\singularities.py in is_increasing(expression, interval, symbol)\r\n 199 \r\n 200 \"\"\"\r\n--> 201 return monotonicity_helper(expression, lambda x: x >= 0, interval, symbol)\r\n 202 \r\n 203 \r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\singularities.py in monotonicity_helper(expression, predicate, interval, symbol)\r\n 155 variable = symbol or (free.pop() if free else Symbol('x'))\r\n 156 derivative = expression.diff(variable)\r\n--> 157 predicate_interval = solveset(predicate(derivative), variable, S.Reals)\r\n 158 return interval.is_subset(predicate_interval)\r\n 159 \r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\solvers\\solveset.py in solveset(f, symbol, domain)\r\n 2214 if symbol not in _rc:\r\n 2215 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1]\r\n-> 2216 rv = solveset(f.xreplace({symbol: x}), x, domain)\r\n 2217 # try to use the original symbol if possible\r\n 2218 try:\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\solvers\\solveset.py in solveset(f, symbol, domain)\r\n 2238 f = piecewise_fold(f)\r\n 2239 \r\n-> 2240 return _solveset(f, symbol, domain, _check=True)\r\n 2241 \r\n 2242 \r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\solvers\\solveset.py in _solveset(f, symbol, domain, _check)\r\n 1044 from .inequalities import solve_univariate_inequality\r\n 1045 try:\r\n-> 1046 result = solve_univariate_inequality(\r\n 1047 f, symbol, domain=domain, relational=False)\r\n 1048 except NotImplementedError:\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\solvers\\inequalities.py in solve_univariate_inequality(expr, gen, relational, domain, continuous)\r\n 489 else:\r\n 490 e = expr.lhs - expr.rhs\r\n--> 491 period = periodicity(e, gen)\r\n 492 if period == S.Zero:\r\n 493 e = expand_mul(e)\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\util.py in periodicity(f, symbol, check)\r\n 486 \r\n 487 else:\r\n--> 488 period = _periodicity(g.args, symbol)\r\n 489 \r\n 490 elif f.is_Add:\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\util.py in _periodicity(args, symbol)\r\n 558 periods = []\r\n 559 for f 
in args:\r\n--> 560 period = periodicity(f, symbol)\r\n 561 if period is None:\r\n 562 return None\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\util.py in periodicity(f, symbol, check)\r\n 483 coeff, g = f.as_independent(symbol, as_Add=False)\r\n 484 if isinstance(g, TrigonometricFunction) or coeff is not S.One:\r\n--> 485 period = periodicity(g, symbol)\r\n 486 \r\n 487 else:\r\n\r\n~\\AppData\\Roaming\\Python\\Python39\\site-packages\\sympy\\calculus\\util.py in periodicity(f, symbol, check)\r\n 483 coeff, g = f.as_independent(symbol, as_Add=False)\r\n 484 if isinstance(g, TrigonometricFunction) or coeff is not S.One:\r\n--> 485 period = periodicity(g, symbol)\r\n 486 \r\n 487 else:\r\n\r\n ...\r\n```\r\n\r\nAny ideas about what is going on here would be appreciated as I would like to use the is_increasing() function to check arbitrary expressions but that obviously can't happen with this bug around. Thanks", "number": 23401, "title": "is_increasing() function can lead to periodicity() infinite recursive loop" } ]
918eb81f4bb3ae062cef98a9dcb9ecda058e33d5
{ "head_commit": "2d971fbfa3296279d7181e4150498d9bdc011395", "head_commit_message": "Fix_23401", "patch_to_review": "diff --git a/sympy/calculus/tests/test_singularities.py b/sympy/calculus/tests/test_singularities.py\nindex 08036b5a1184..4d34e4a7c597 100644\n--- a/sympy/calculus/tests/test_singularities.py\n+++ b/sympy/calculus/tests/test_singularities.py\n@@ -93,3 +93,9 @@ def test_is_monotonic():\n assert not is_monotonic(-x**2, S.Reals)\n assert is_monotonic(x**2 + y + 1, Interval(1, 2), x)\n raises(NotImplementedError, lambda: is_monotonic(x**2 + y + 1))\n+\n+\n+def test_issue():\n+ x = Symbol('x')\n+ expr = (x + 1)/(-1.0e-3*x**2 + 0.1*x + 0.1)\n+ assert is_increasing(expr, Interval(1,2), x)\ndiff --git a/sympy/calculus/util.py b/sympy/calculus/util.py\nindex 4f6ada07f123..32c9ace12750 100644\n--- a/sympy/calculus/util.py\n+++ b/sympy/calculus/util.py\n@@ -480,7 +480,7 @@ def _check(orig_f, period):\n \n elif f.is_Mul:\n coeff, g = f.as_independent(symbol, as_Add=False)\n- if isinstance(g, TrigonometricFunction) or coeff is not S.One:\n+ if isinstance(g, TrigonometricFunction) or coeff is not S.One and not (coeff.is_Float and coeff == 1.0):\n period = periodicity(g, symbol)\n \n else:\n" }
[ { "diff_hunk": "@@ -480,7 +480,7 @@ def _check(orig_f, period):\n \n elif f.is_Mul:\n coeff, g = f.as_independent(symbol, as_Add=False)\n- if isinstance(g, TrigonometricFunction) or coeff is not S.One:\n+ if isinstance(g, TrigonometricFunction) or coeff is not S.One and not (coeff.is_Float and coeff == 1.0):", "line": null, "original_line": 483, "original_start_line": null, "path": "sympy/calculus/util.py", "start_line": null, "text": "@user1:\nWe should really make a function for testing this sort of thing.\r\n\r\nLooking at `git grep 'is S.One'` I see a lot of other places in the codebase that potentially have the same problem.\n\n@user2:\nI think it better to use `coeff != 1` here.\n\n@user1:\nUsing `!=` does work but I would rather have a function specifically for testing integers and integer-valued floats (see #20033)\n\n@user2:\nA bit off-topic and I believe people have talked it a lot, but I don't understand why Sym(bolic)Py should support calculations mixing numeric and symbolic.\n\n@user1:\nIn practice an analysis-oriented CAS that doesn't support approximate numerics is not very useful because most real problems cannot be solved symbolically.\n\n@author:\nYes the `'is not S.One'` line is a part of multiple `if` routines in the code. Should I make a private function and check everytime ?\n\n@author:\nAlso kindly look into the `CircleCI` test fail .All other tests have passed .\n\n@user3:\n> Also kindly look into the `CircleCI` test fail .All other tests have passed .\r\n\r\nYou have to rebase your code with latest master, so that latest commits into the master are available on your local machine. You can do that by:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit merge upstream/master\r\ngit checkout fix_23041\r\ngit rebase master\r\n```\n\n@user1:\nThe CircleCI check doesn't matter so nothing needs to be done about that.\r\n\r\nI think maybe what is needed is a function that can check for when two expressions are equal except for possible differences where one expression has a float and the other has an int. For this particular case though to fix this particular issue we could just use `coeff != 1` for now to keep it simple.\n\n@author:\nThanks @user2 for the commits.I have been busy with travel ,hence could not respond to the desired suggestions. Thanks again !" } ]
836af81302765dbefb39fbe6ab469201a378a3fc
diff --git a/sympy/calculus/tests/test_singularities.py b/sympy/calculus/tests/test_singularities.py index 08036b5a1184..2a86be98bbb6 100644 --- a/sympy/calculus/tests/test_singularities.py +++ b/sympy/calculus/tests/test_singularities.py @@ -93,3 +93,9 @@ def test_is_monotonic(): assert not is_monotonic(-x**2, S.Reals) assert is_monotonic(x**2 + y + 1, Interval(1, 2), x) raises(NotImplementedError, lambda: is_monotonic(x**2 + y + 1)) + + +def test_issue_23401(): + x = Symbol('x') + expr = (x + 1)/(-1.0e-3*x**2 + 0.1*x + 0.1) + assert is_increasing(expr, Interval(1,2), x) diff --git a/sympy/calculus/util.py b/sympy/calculus/util.py index 4f6ada07f123..fc2910b8e777 100644 --- a/sympy/calculus/util.py +++ b/sympy/calculus/util.py @@ -480,7 +480,7 @@ def _check(orig_f, period): elif f.is_Mul: coeff, g = f.as_independent(symbol, as_Add=False) - if isinstance(g, TrigonometricFunction) or coeff is not S.One: + if isinstance(g, TrigonometricFunction) or coeff != 1: period = periodicity(g, symbol) else:
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-23821@779eba5
sympy/sympy
Python
23,821
add and use `has_xfree`
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs fixes https://github.com/sympy/sympy/issues/23775 by removing auto-expansion by linsolve (while still handling Eq) alternative to #23748 #### Brief description of what is fixed or changed `has_free` is slower than needed for some cases since it does sub-expression matches: ```python >>> eq = f(x+y+1) >>> eq.has_free(f) True >>> eq.has_free(x + 1) True ``` `has_xfree` is more strict and will only return True for full expression matches (similar to the way that `xreplace` makes replacements), so both of the above return False with `has_xfree`. #### Other comments `linsolve` and related functions that extract coefficients from linear systems fail when any term or factor depends on a provided symbol. Rather than providing a limited workaround for cases like treating `f(x)` as independent of `f(x).diff()`, those needing to solve such systems can replace all symbol-dependent objects with dummy symbols. The `recast_to_symbols` can do this: ```python >>> from sympy.solvers.solveset import recast_to_symbols >>> eqs = Tuple(f(x)+f(x).diff()) >>> e, k, d = recast_to_symbols(eqs, eqs.atoms(Derivative)) >>> linsolve(e, [f(x)]).xreplace(d) {(-Derivative(f(x), x),)} ``` Alternatively (though hackish) one could temporarily define Derivative's `has_xfree` to return None ```python >>> Derivative.has_xfree = lambda a, *b: None >>> linsolve(eqs, [f(x)]) {(-Derivative(f(x), x),)} >>> del Derivative.has_xfree ``` I tried to define `Derivative.bound_symbols` as `set(Derivative.variables)` so `iterfreeargs` would not return the function itself. This worked wrt `linsolve`, but many other tests failed. I am not sure of the implications, but it *seems* like the variables of differentation should be considered bound. ```diff diff --git a/sympy/core/function.py b/sympy/core/function.py index 551cf41fe8..0f0c6295f4 100644 --- a/sympy/core/function.py +++ b/sympy/core/function.py @@ -1480,6 +1480,10 @@ def __new__(cls, expr, *variables, **kwargs): expr = factor_terms(signsimp(expr)) return expr + @property + def bound_symbols(cls): + return set(cls.variables) + @property def canonical(cls): return cls.func(cls.expr, ``` then ```python >>> from sympy.core.traversal import * >>> list(iterfreeargs(f(x).diff(x,2))) [Derivative(f(x), (x, 2)), 2] --> in master: [Derivative(f(x), (x, 2)), f(x), (x, 2), x, x, 2] >>> list(iterfreeargs(Integral(x,(x,1,2)))) [Integral(x, (x, 1, 2)), 1, 2] >>> linsolve([f(x)+f(x).diff()], [f(x)]) {(-Derivative(f(x), x),)} >>> integrate(Derivative(f(x), x), x) x*Derivative(f(x), x) --> in master: f(x) >>> integrate(x**3*Derivative(f(x), (x, 4))) x**4*Derivative(f(x), (x, 4))/4 --> in master: x**3*Derivative(f(x), (x, 3)) - 3*x**2*Derivative(f(x), (x, 2)) + 6*x*Derivative(f(x), x) - 6*f(x) ``` #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. 
--> <!-- BEGIN RELEASE NOTES --> * solvers * `linsolve` no longer fully expands expressions so any nonlinear terms that would cancel after expansion now raise * `linear_coeffs` is more efficient and can return results as a dictionary (which is more economical for sparse equations) * `linear_eq_to_matrix` can now handle system expressed in terms of functions * core * `Basic.has_xfree` has been added. Like `xreplace`, it targets only full expression matching <!-- END RELEASE NOTES -->
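A small runnable sketch of the `has_free` / `has_xfree` distinction described above, using the same names as the examples in the description:

```python
from sympy import Function, symbols

x, y = symbols("x y")
f = Function("f")
eq = f(x + y + 1)

# has_free also matches sub-expressions of free arguments ...
print(eq.has_free(f))             # True
print(eq.has_free(x + 1))         # True

# ... while has_xfree only reports exact free-argument matches
print(eq.has_xfree({f}))          # False
print(eq.has_xfree({x + 1}))      # False
print(eq.has_xfree({x + y + 1}))  # True
```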
2022-07-23T15:08:51Z
linsolve can return NaN The routine to identify coefficients of a linear system is fast and can quickly distinguish linear from non-linear systems, but the obtained coefficients are not expanded and should probably be expanded before using them to compute a solution for the system: the coefficient of `x` in the following is 0

```python
>>> linsolve([x*(y+1)*(y-1)-x*y**2+x-2],[x])
{(NaN,)}
>>> linsolve([0],[x])
{(x,)}
```

This expansion/check should not be done in the coefficient extraction routine, however.
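For reference, a hedged sketch of what the system reduces to once the coefficient is expanded by hand: the `x` terms cancel, leaving an inconsistent constant equation, so the expected answer is the empty set rather than `NaN`:

```python
from sympy import symbols, expand, linsolve

x, y = symbols("x y")
eq = x*(y + 1)*(y - 1) - x*y**2 + x - 2

print(expand(eq))                   # -2: the coefficient of x really is 0
print(linsolve([expand(eq)], [x]))  # EmptySet, since -2 = 0 has no solution
```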
The coefficient should be expanded automatically when constructing the domain. It fails because the wrong domain is used because of a bug in `construct_domain`:

```python
In [13]: construct_domain([-y**2 + (y-1)*(y+1) + y])
Out[13]: (ZZ[y], [y - 1])

In [14]: construct_domain([-y**2 + (y-1)*(y+1) + 1])
Out[14]: (EX, [EX(-y**2 + (y - 1)*(y + 1) + 1)])
```

This seems to fix it:

```diff
diff --git a/sympy/polys/constructor.py b/sympy/polys/constructor.py
index dad69c5..f6b0990 100644
--- a/sympy/polys/constructor.py
+++ b/sympy/polys/constructor.py
@@ -139,8 +139,6 @@ def _construct_composite(coeffs, opt):
         denoms.append(denom)
 
     polys, gens = parallel_dict_from_basic(numers + denoms) # XXX: sorting
-    if not gens:
-        return None
 
     if opt.composite is None:
         if any(gen.is_number and gen.is_algebraic for gen in gens):
diff --git a/sympy/polys/matrices/linsolve.py b/sympy/polys/matrices/linsolve.py
index 75ae26d..3cbe078 100644
--- a/sympy/polys/matrices/linsolve.py
+++ b/sympy/polys/matrices/linsolve.py
@@ -124,9 +124,15 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):
     sym2index = dict(zip(syms, range(nsyms)))
     eqsdict = []
     for eq, rhs in zip(eqs_coeffs, eqs_rhs):
-        eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}
+        eqdict = {}
+        for s, c in eq.items():
+            ec = elem_map[c]
+            if ec:
+                eqdict[sym2index[s]] = ec
         if rhs:
-            eqdict[nsyms] = - elem_map[rhs]
+            erhs = elem_map[rhs]
+            if erhs:
+                eqdict[nsyms] = - erhs
         if eqdict:
             eqsdict.append(eqdict)
     sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)
```
[ { "body": "The routine to identify coefficients of a linear system is fast and can quickly distinguish linear from non-linear system, but the obtained coefficients are not expanded and should probably be so before using them to compute a solution for the system: the coefficient of `x` in the following is 0\r\n```python\r\n>>> linsolve([x*(y+1)*(y-1)-x*y**2+x-2],[x])\r\n{(NaN,)}\r\n>>> linsolve([0],[x])\r\n{(x,)}\r\n```\r\nThis expansion/check should not be done in the coefficient extraction routine, however.", "number": 23775, "title": "linsolve can return NaN" } ]
64e187e9521cf63653a831c5c3edc956715207ce
{ "head_commit": "779eba5c83b878de53304429aaa8b6149480973d", "head_commit_message": "don't integrate 0 wrt t\n\nIt doesn't hurt, but it changes the form of the solution when a Sparse or Dense matrix is used because the Sparse automatically doesn't provide zeros (so they aren't integrated) while Dense does.", "patch_to_review": "diff --git a/sympy/core/basic.py b/sympy/core/basic.py\nindex 14640c32d9d3..1fcf7fa9092f 100644\n--- a/sympy/core/basic.py\n+++ b/sympy/core/basic.py\n@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s):\n+ \"\"\"return True if self has any of the patterns in s as a\n+ free argument, else False. This is like `Basic.has_free`\n+ but this will only report exact argument matches.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import Function\n+ >>> from sympy.abc import x, y\n+ >>> f = Function('f')\n+ >>> f(x).has_xfree({f})\n+ False\n+ >>> f(x).has_xfree({f(x)})\n+ True\n+ >>> f(x + 1).has_xfree({x})\n+ True\n+ >>> f(x + 1).has_xfree({x + 1})\n+ True\n+ >>> f(x + y + 1).has_xfree({x + 1})\n+ False\n+ \"\"\"\n+ # protect O(1) containment check by requiring:\n+ if not type(s) in (dict, set):\n+ raise ValueError('expecting set or dict argument')\n+ return any(a in s for a in iterfreeargs(self))\n+\n @cacheit\n def has_free(self, *patterns):\n \"\"\"return True if self has object(s) ``x`` as a free expression\n@@ -1284,8 +1311,26 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise ValueError(filldedent('''\n+ Expecting 1 or more Basic args, not a single\n+ non-Basic iterable. 
Don't forget to unpack\n+ iterables: `eq.has_free(*patterns)`'''))\n+ # try quick test first\n+ try:\n+ s = set(patterns)\n+ rv = self.has_xfree(s)\n+ if rv:\n+ return rv\n+ except TypeError:\n+ pass\n+ # now try matching through slower _has\n return self._has(iterfreeargs, *patterns)\n \n def _has(self, iterargs, *patterns):\ndiff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py\nindex d94201d65dc2..51376c91c7a3 100644\n--- a/sympy/core/tests/test_expr.py\n+++ b/sympy/core/tests/test_expr.py\n@@ -31,6 +31,7 @@\n from sympy.polys.polytools import factor, cancel, Poly\n from sympy.polys.rationaltools import together\n from sympy.series.order import O\n+from sympy.sets.sets import FiniteSet\n from sympy.simplify.combsimp import combsimp\n from sympy.simplify.gammasimp import gammasimp\n from sympy.simplify.powsimp import powsimp\n@@ -1624,12 +1625,31 @@ def test_has_free():\n assert Integral(f(x), (f(x), 1, y)).has_free(y)\n assert not Integral(f(x), (f(x), 1, y)).has_free(x)\n assert not Integral(f(x), (f(x), 1, y)).has_free(f(x))\n+ # simple extraction\n+ assert (x + 1 + y).has_free(x + 1)\n+ assert not (x + 2 + y).has_free(x + 1)\n+ assert (2 + 3*x*y).has_free(3*x)\n+ raises(ValueError, lambda: x.has_free({x, y}))\n+ s = FiniteSet(1, 2)\n+ assert Piecewise((s, x > 3), (4, True)).has_free(s)\n+ assert not Piecewise((1, x > 3), (4, True)).has_free(s)\n+ # can't make set of these, but fallback will handle\n+ assert not x.has_free(y, [])\n+\n+\n+def test_has_xfree():\n+ assert (x + 1).has_xfree({x})\n+ assert ((x + 1)**2).has_xfree({x + 1})\n+ assert not (x + y + 1).has_xfree({x + 1})\n+ raises(ValueError, lambda: x.has_xfree(x))\n+ raises(ValueError, lambda: x.has_xfree([x]))\n \n \n def test_issue_5300():\n x = Symbol('x', commutative=False)\n assert x*sqrt(2)/sqrt(6) == x*sqrt(3)/3\n \n+\n def test_floordiv():\n from sympy.functions.elementary.integers import floor\n assert x // y == floor(x / y)\ndiff --git a/sympy/core/traversal.py b/sympy/core/traversal.py\nindex 980ff4a03381..0615c221a538 100644\n--- a/sympy/core/traversal.py\n+++ b/sympy/core/traversal.py\n@@ -41,6 +41,7 @@ def iterfreeargs(expr, _first=True):\n \n Examples\n ========\n+\n >>> from sympy import Integral, Function\n >>> from sympy.abc import x\n >>> f = Function('f')\n@@ -66,8 +67,6 @@ def iterfreeargs(expr, _first=True):\n pass # for cases like f being an arg\n \n \n-\n-\n class preorder_traversal:\n \"\"\"\n Do a pre-order traversal of a tree.\ndiff --git a/sympy/polys/matrices/linsolve.py b/sympy/polys/matrices/linsolve.py\nindex 75ae26d669b2..ebf37e461e78 100644\n--- a/sympy/polys/matrices/linsolve.py\n+++ b/sympy/polys/matrices/linsolve.py\n@@ -42,8 +42,11 @@\n sdm_nullspace_from_rref\n )\n \n+from sympy.utilities.misc import filldedent\n+\n \n def _linsolve(eqs, syms):\n+\n \"\"\"Solve a linear system of equations.\n \n Examples\n@@ -69,8 +72,8 @@ def _linsolve(eqs, syms):\n nsyms = len(syms)\n \n # Convert to sparse augmented matrix (len(eqs) x (nsyms+1))\n- eqsdict, rhs = _linear_eq_to_dict(eqs, syms)\n- Aaug = sympy_dict_to_dm(eqsdict, rhs, syms)\n+ eqsdict, const = _linear_eq_to_dict(eqs, syms)\n+ Aaug = sympy_dict_to_dm(eqsdict, const, syms)\n K = Aaug.domain\n \n # sdm_irref has issues with float matrices. 
This uses the ddm_rref()\n@@ -126,53 +129,55 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ c, d = _lin_eq2dict(e, symset)\n+ coeffs.append(d)\n+ ind.append(c)\n+ return coeffs, ind\n \n- return [expand_eq(eq) for eq in eqs]\n \n+def _lin_eq2dict(a, symset):\n+ \"\"\"return (c, d) where c is the sym-independent part of ``a`` and\n+ ``d`` is an efficiently calculated dictionary mapping symbols to\n+ their coefficients. A PolyNonlinearError is raised if non-linearity\n+ is detected.\n \n-def _linear_eq_to_dict(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- try:\n- return _linear_eq_to_dict_inner(eqs, syms)\n- except PolyNonlinearError:\n- # XXX: This should be deprecated:\n- eqs = _expand_eqs_deprecated(eqs)\n- return _linear_eq_to_dict_inner(eqs, syms)\n-\n-\n-def _linear_eq_to_dict_inner(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- syms = set(syms)\n- eqsdict, eqs_rhs = [], []\n- for eq in eqs:\n- rhs, eqdict = _lin_eq2dict(eq, syms)\n- eqsdict.append(eqdict)\n- eqs_rhs.append(rhs)\n- return eqsdict, eqs_rhs\n+ The values in the dictionary will be non-zero.\n \n+ Examples\n+ ========\n \n-def _lin_eq2dict(a, symset):\n- \"\"\"Efficiently convert a linear equation to a dict of coefficients\"\"\"\n+ >>> from sympy.polys.matrices.linsolve import _lin_eq2dict\n+ >>> from sympy.abc import x, y\n+ >>> _lin_eq2dict(x + 2*y + 3, {x, y})\n+ (3, {x: 1, y: 2})\n+ \"\"\"\n if a in symset:\n return S.Zero, {a: S.One}\n- elif a.is_Add:\n+ if a.is_Add:\n terms_list = defaultdict(list)\n coeff_list = []\n for ai in a.args:\n@@ -183,7 +188,7 @@ def _lin_eq2dict(a, symset):\n coeff = Add(*coeff_list)\n terms = {sym: Add(*coeffs) for sym, coeffs in terms_list.items()}\n return coeff, terms\n- elif a.is_Mul:\n+ if a.is_Mul:\n terms = terms_coeff = None\n coeff_list = []\n for ai in a.args:\n@@ -194,16 +199,30 @@ def _lin_eq2dict(a, symset):\n terms = ti\n terms_coeff = ci\n else:\n- raise PolyNonlinearError\n- coeff = Mul(*coeff_list)\n+ # since ti is not null and we already have\n+ # a term, this is a cross term\n+ raise PolyNonlinearError(filldedent('''\n+ nonlinear cross-term: %s''' % a))\n+ coeff = Mul._from_args(coeff_list)\n if terms is None:\n return coeff, {}\n else:\n terms = {sym: coeff * c for sym, c in terms.items()}\n return coeff * terms_coeff, terms\n- elif a.is_Equality:\n- return _lin_eq2dict(a.lhs - a.rhs, symset)\n- elif not 
a.has_free(*symset):\n+ if a.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in a.args]\n+ # there were no nonlinear errors so now\n+ # cancellation is allowed\n+ coeff -= cR\n+ for k, v in tR.items():\n+ if k in terms:\n+ terms[k] -= v\n+ else:\n+ terms[k] = v\n+ # don't store coefficients of 0, however\n+ terms = {k: v for k, v in terms.items() if v}\n+ return coeff, terms\n+ if not a.has_xfree(symset):\n return a, {}\n- else:\n- raise PolyNonlinearError\n+ raise PolyNonlinearError('nonlinear term: %s' % a)\ndiff --git a/sympy/polys/matrices/tests/test_linsolve.py b/sympy/polys/matrices/tests/test_linsolve.py\nindex e82f3f6af664..9d8cd7eb9feb 100644\n--- a/sympy/polys/matrices/tests/test_linsolve.py\n+++ b/sympy/polys/matrices/tests/test_linsolve.py\n@@ -103,6 +103,9 @@ def all_close(sol1, sol2, eps=1e-15):\n \n \n def test__linsolve_deprecated():\n- assert _linsolve([Eq(x**2, x**2+y)], [x, y]) == {x:x, y:S.Zero}\n- assert _linsolve([(x+y)**2-x**2], [x]) == {x:-y/2}\n- assert _linsolve([Eq((x+y)**2, x**2)], [x]) == {x:-y/2}\n+ raises(PolyNonlinearError, lambda:\n+ _linsolve([Eq(x**2, x**2 + y)], [x, y]))\n+ raises(PolyNonlinearError, lambda:\n+ _linsolve([(x + y)**2 - x**2], [x]))\n+ raises(PolyNonlinearError, lambda:\n+ _linsolve([Eq((x + y)**2, x**2)], [x]))\ndiff --git a/sympy/solvers/ode/systems.py b/sympy/solvers/ode/systems.py\nindex 82bdea435e8d..98263983e6f7 100644\n--- a/sympy/solvers/ode/systems.py\n+++ b/sympy/solvers/ode/systems.py\n@@ -197,7 +197,6 @@ def simpcoeff(coeff, wrt2):\n rep = {}\n \n sol = [Eq(s.lhs, simprhs(s.rhs, rep, wrt1, wrt2)) for s in sol]\n-\n return sol\n \n \n@@ -468,13 +467,7 @@ def linear_ode_to_matrix(eqs, funcs, t, order):\n \n for o in range(order, -1, -1):\n # Work from the highest derivative down\n- funcs_deriv = [func.diff(t, o) for func in funcs]\n-\n- # linear_eq_to_matrix expects a proper symbol so substitute e.g.\n- # Derivative(x(t), t) for a Dummy.\n- rep = {func_deriv: Dummy() for func_deriv in funcs_deriv}\n- eqs = [eq.subs(rep) for eq in eqs]\n- syms = [rep[func_deriv] for func_deriv in funcs_deriv]\n+ syms = [func.diff(t, o) for func in funcs]\n \n # Ai is the matrix for X(t).diff(t, o)\n # eqs is minus the remainder of the equations.\n@@ -947,7 +940,7 @@ def linodesolve(A, t, b=None, B=None, type=\"auto\", doit=False,\n # constants = numbered_symbols(prefix='C', cls=Dummy, start=const_idx+1)\n Cvect = Matrix(list(Dummy() for _ in range(n)))\n \n- if any(type == typ for typ in [\"type2\", \"type4\", \"type6\"]) and b is None:\n+ if b is None and any(type == typ for typ in [\"type2\", \"type4\", \"type6\"]):\n b = zeros(n, 1)\n \n is_transformed = tau is not None\n@@ -973,6 +966,7 @@ def linodesolve(A, t, b=None, B=None, type=\"auto\", doit=False,\n A = system_info['A']\n b = system_info['b']\n \n+ intx_wrtt = lambda x: Integral(x, t) if x else 0\n if type in (\"type1\", \"type2\", \"type5\", \"type6\"):\n P, J = matrix_exp_jordan_form(A, t)\n P = simplify(P)\n@@ -981,8 +975,7 @@ def linodesolve(A, t, b=None, B=None, type=\"auto\", doit=False,\n sol_vector = P * (J * Cvect)\n else:\n Jinv = J.subs(t, -t)\n- sol_vector = P * J * ((Jinv * P.inv() * b).applyfunc(lambda x: Integral(x, t)) + Cvect)\n-\n+ sol_vector = P * J * ((Jinv * P.inv() * b).applyfunc(intx_wrtt) + Cvect)\n else:\n if B is None:\n B, _ = _is_commutative_anti_derivative(A, t)\n@@ -990,7 +983,7 @@ def linodesolve(A, t, b=None, B=None, type=\"auto\", doit=False,\n if type == \"type3\":\n sol_vector = B.exp() * Cvect\n else:\n- sol_vector = 
B.exp() * (((-B).exp() * b).applyfunc(lambda x: Integral(x, t)) + Cvect)\n+ sol_vector = B.exp() * (((-B).exp() * b).applyfunc(intx_wrtt) + Cvect)\n \n if is_transformed:\n sol_vector = sol_vector.subs(t, tau)\ndiff --git a/sympy/solvers/ode/tests/test_systems.py b/sympy/solvers/ode/tests/test_systems.py\nindex dd08d644f7a2..2a2b1e155c6d 100644\n--- a/sympy/solvers/ode/tests/test_systems.py\n+++ b/sympy/solvers/ode/tests/test_systems.py\n@@ -1071,8 +1071,8 @@ def test_sysode_linear_neq_order1_type2():\n eqs6 = [Eq(Derivative(f(x), x), -9*f(x) - 4*g(x)),\n Eq(Derivative(g(x), x), -4*g(x)),\n Eq(Derivative(h(x), x), h(x) + exp(x))]\n- sol6 = [Eq(f(x), C1*exp(-4*x)*Rational(-4, 5) + C2*exp(-9*x)),\n- Eq(g(x), C1*exp(-4*x)),\n+ sol6 = [Eq(f(x), C2*exp(-4*x)*Rational(-4, 5) + C1*exp(-9*x)),\n+ Eq(g(x), C2*exp(-4*x)),\n Eq(h(x), C3*exp(x) + x*exp(x))]\n assert dsolve(eqs6) == sol6\n assert checksysodesol(eqs6, sol6) == (True, [0, 0, 0])\n@@ -1647,11 +1647,11 @@ def test_higher_order_to_first_order_9():\n \n eqs9 = [f(x) + g(x) - 2*exp(I*x) + 2*Derivative(f(x), x) + Derivative(f(x), (x, 2)),\n f(x) + g(x) - 2*exp(I*x) + 2*Derivative(g(x), x) + Derivative(g(x), (x, 2))]\n- sol9 = [Eq(f(x), -C1 + C2*exp(-2*x)/2 - (C3/2 - C4/2)*exp(-x)*cos(x)\n- + (C3/2 + C4/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5)\n+ sol9 = [Eq(f(x), -C1 + C4*exp(-2*x)/2 - (C2/2 - C3/2)*exp(-x)*cos(x)\n+ + (C2/2 + C3/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5)\n + 2*((1 - 2*I)*exp(I*x)*cos(x)**2/5)),\n- Eq(g(x), C1 - C2*exp(-2*x)/2 - (C3/2 - C4/2)*exp(-x)*cos(x)\n- + (C3/2 + C4/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5)\n+ Eq(g(x), C1 - C4*exp(-2*x)/2 - (C2/2 - C3/2)*exp(-x)*cos(x)\n+ + (C2/2 + C3/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5)\n + 2*((1 - 2*I)*exp(I*x)*cos(x)**2/5))]\n assert dsolve(eqs9) == sol9\n assert checksysodesol(eqs9, sol9) == (True, [0, 0])\n@@ -1977,11 +1977,8 @@ def test_linodesolve():\n \n # non-homogeneous term assumed to be 0\n sol1 = [-C1*exp(-t/2 + sqrt(5)*t/2)/2 + sqrt(5)*C1*exp(-t/2 + sqrt(5)*t/2)/2 - sqrt(5)*C2*exp(-sqrt(5)*t/2\n- - t/2)/2 - C2*exp(-sqrt(5)*t/2 - t/2)/2 - exp(-t/2 + sqrt(5)*t/2)*Integral(0, t)/2 +\n- sqrt(5)*exp(-t/2 + sqrt(5)*t/2)*Integral(0, t)/2 - sqrt(5)*exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)/2\n- - exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)/2,\n- C1*exp(-t/2 + sqrt(5)*t/2) + C2*exp(-sqrt(5)*t/2 - t/2)\n- + exp(-t/2 + sqrt(5)*t/2)*Integral(0, t) + exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)]\n+ - t/2)/2 - C2*exp(-sqrt(5)*t/2 - t/2)/2,\n+ C1*exp(-t/2 + sqrt(5)*t/2) + C2*exp(-sqrt(5)*t/2 - t/2)]\n assert constant_renumber(linodesolve(A, t, type=\"type2\"), variables=[t]) == sol1\n \n # Testing the Errors\ndiff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py\nindex b0c49825a988..c19e88868803 100644\n--- a/sympy/solvers/solveset.py\n+++ b/sympy/solvers/solveset.py\n@@ -12,7 +12,7 @@\n \"\"\"\n from sympy.core.sympify import sympify\n from sympy.core import (S, Pow, Dummy, pi, Expr, Wild, Mul, Equality,\n- Add)\n+ Add, Basic)\n from sympy.core.containers import Tuple\n from sympy.core.function import (Lambda, expand_complex, AppliedUndef,\n expand_log, _mexpand, expand_trig, nfloat)\n@@ -23,7 +23,7 @@\n from sympy.core.sorting import default_sort_key, ordered\n from sympy.core.symbol import Symbol, _uniquely_named_symbol\n from sympy.core.sympify import _sympify\n-from sympy.core.traversal import iterfreeargs\n+from sympy.polys.matrices.linsolve import _linear_eq_to_dict, _lin_eq2dict\n from sympy.polys.polyroots import 
UnsolvableFactorError\n from sympy.simplify.simplify import simplify, fraction, trigsimp, nsimplify\n from sympy.simplify import powdenest, logcombine\n@@ -38,7 +38,7 @@\n from sympy.sets import (FiniteSet, imageset, Interval, Intersection,\n Union, ConditionSet, ImageSet, Complement, Contains)\n from sympy.sets.sets import Set, ProductSet\n-from sympy.matrices import Matrix, MatrixBase\n+from sympy.matrices import SparseMatrix, MatrixBase\n from sympy.ntheory import totient\n from sympy.ntheory.factor_ import divisors\n from sympy.ntheory.residue_ntheory import discrete_log, nthroot_mod\n@@ -54,11 +54,10 @@\n from sympy.solvers.polysys import solve_poly_system\n from sympy.utilities import filldedent\n from sympy.utilities.iterables import (numbered_symbols, has_dups,\n- is_sequence)\n+ is_sequence, iterable)\n from sympy.calculus.util import periodicity, continuous_domain, function_range\n \n from types import GeneratorType\n-from collections import defaultdict\n \n \n class NonlinearError(ValueError):\n@@ -2411,7 +2410,7 @@ def solvify(f, symbol, domain):\n ###############################################################################\n \n \n-def linear_coeffs(eq, *syms, **_kw):\n+def linear_coeffs(eq, *syms, dict=False):\n \"\"\"Return a list whose elements are the coefficients of the\n corresponding symbols in the sum of terms in ``eq``.\n The additive constant is returned as the last element of the\n@@ -2422,69 +2421,82 @@ def linear_coeffs(eq, *syms, **_kw):\n \n NonlinearError\n The equation contains a nonlinear term\n+ ValueError\n+ duplicate or unordered symbols are passed\n+\n+ Parameters\n+ ==========\n+\n+ dict - (default False) when True, return coefficients as a\n+ dictionary with coefficients keyed to syms that were present;\n+ key 1 gives the constant term\n \n Examples\n ========\n \n >>> from sympy.solvers.solveset import linear_coeffs\n >>> from sympy.abc import x, y, z\n-\n >>> linear_coeffs(3*x + 2*y - 1, x, y)\n [3, 2, -1]\n \n It is not necessary to expand the expression:\n \n- >>> linear_coeffs(x + y*(z*(x*3 + 2) + 3), x)\n- [3*y*z + 1, y*(2*z + 3)]\n+ >>> linear_coeffs(x + y*(z*(x*3 + 2) + 3), x)\n+ [3*y*z + 1, y*(2*z + 3)]\n \n- But if there are nonlinear or cross terms -- even if they would\n- cancel after simplification -- an error is raised so the situation\n- does not pass silently past the caller's attention:\n+ When nonlinear is detected, an error will be raised:\n \n- >>> eq = 1/x*(x - 1) + 1/x\n- >>> linear_coeffs(eq.expand(), x)\n- [0, 1]\n- >>> linear_coeffs(eq, x)\n- Traceback (most recent call last):\n- ...\n- NonlinearError: nonlinear term encountered: 1/x\n+ * even if they would cancel after expansion (so the\n+ situation does not pass silently past the caller's\n+ attention)\n \n- >>> linear_coeffs(x*(y + 1) - x*y, x, y)\n- Traceback (most recent call last):\n- ...\n- NonlinearError: nonlinear term encountered: x*(y + 1)\n+ >>> eq = 1/x*(x - 1) + 1/x\n+ >>> linear_coeffs(eq.expand(), x)\n+ [0, 1]\n+ >>> linear_coeffs(eq, x)\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError:\n+ nonlinear in given generators\n+\n+ * when there are cross terms\n+\n+ >>> linear_coeffs(x*(y + 1), x, y)\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError:\n+ symbol-dependent cross-terms encountered\n+\n+ * when there are terms that contain an expression\n+ dependent on the symbols that is not linear\n+\n+ >>> linear_coeffs(x**2, x)\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError:\n+ nonlinear in given generators\n \"\"\"\n- d = 
defaultdict(list)\n eq = _sympify(eq)\n+ if len(syms) == 1 and iterable(syms[0]) and not isinstance(syms[0], Basic):\n+ raise ValueError('expecting unpacked symbols, *syms')\n symset = set(syms)\n if len(symset) != len(syms):\n raise ValueError('duplicate symbols given')\n- has = set(iterfreeargs(eq)) & symset\n- if not has:\n- return [S.Zero]*len(syms) + [eq]\n- c, terms = eq.as_coeff_add(*has)\n- d[0].extend(Add.make_args(c))\n- for t in terms:\n- m, f = t.as_coeff_mul(*has)\n- if len(f) != 1:\n- break\n- f = f[0]\n- if f in symset:\n- d[f].append(m)\n- elif f.is_Add:\n- d1 = linear_coeffs(f, *has, **{'dict': True})\n- d[0].append(m*d1.pop(0))\n- for xf, vf in d1.items():\n- d[xf].append(m*vf)\n- else:\n- break\n- else:\n- for k, v in d.items():\n- d[k] = Add(*v)\n- if not _kw:\n- return [d.get(s, S.Zero) for s in syms]+ [d[0]]\n- return d # default is still list but this won't matter\n- raise NonlinearError('nonlinear term encountered: %s' % t)\n+ try:\n+ c, d = _lin_eq2dict(eq, symset)\n+ except PolyNonlinearError as err:\n+ raise NonlinearError(str(err))\n+ if dict:\n+ if c:\n+ d[S.One] = c\n+ return d\n+ rv = [S.Zero]*(len(syms) + 1)\n+ rv[-1] = c\n+ for i, k in enumerate(syms):\n+ if k not in d:\n+ continue\n+ rv[i] = d[k]\n+ return rv\n \n \n def linear_eq_to_matrix(equations, *symbols):\n@@ -2531,39 +2543,39 @@ def linear_eq_to_matrix(equations, *symbols):\n The coefficients (numerical or symbolic) of the symbols will\n be returned as matrices:\n \n- >>> eqns = [c*x + z - 1 - c, y + z, x - y]\n- >>> A, b = linear_eq_to_matrix(eqns, [x, y, z])\n- >>> A\n- Matrix([\n- [c, 0, 1],\n- [0, 1, 1],\n- [1, -1, 0]])\n- >>> b\n- Matrix([\n- [c + 1],\n- [ 0],\n- [ 0]])\n+ >>> eqns = [c*x + z - 1 - c, y + z, x - y]\n+ >>> A, b = linear_eq_to_matrix(eqns, [x, y, z])\n+ >>> A\n+ Matrix([\n+ [c, 0, 1],\n+ [0, 1, 1],\n+ [1, -1, 0]])\n+ >>> b\n+ Matrix([\n+ [c + 1],\n+ [ 0],\n+ [ 0]])\n \n This routine does not simplify expressions and will raise an error\n if nonlinearity is encountered:\n \n- >>> eqns = [\n- ... (x**2 - 3*x)/(x - 3) - 3,\n- ... y**2 - 3*y - y*(y - 4) + x - 4]\n- >>> linear_eq_to_matrix(eqns, [x, y])\n- Traceback (most recent call last):\n- ...\n- NonlinearError:\n- The term (x**2 - 3*x)/(x - 3) is nonlinear in {x, y}\n+ >>> eqns = [\n+ ... (x**2 - 3*x)/(x - 3) - 3,\n+ ... 
y**2 - 3*y - y*(y - 4) + x - 4]\n+ >>> linear_eq_to_matrix(eqns, [x, y])\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError:\n+ symbol-dependent term can be ignored using `strict=False`\n \n- Simplifying these equations will discard the removable singularity\n- in the first, reveal the linear structure of the second:\n+ Simplifying these equations will discard the removable singularity\n+ in the first and reveal the linear structure of the second:\n \n- >>> [e.simplify() for e in eqns]\n- [x - 3, x + y - 4]\n+ >>> [e.simplify() for e in eqns]\n+ [x - 3, x + y - 4]\n \n- Any such simplification needed to eliminate nonlinear terms must\n- be done before calling this routine.\n+ Any such simplification needed to eliminate nonlinear terms must\n+ be done *before* calling this routine.\n \"\"\"\n if not symbols:\n raise ValueError(filldedent('''\n@@ -2574,12 +2586,6 @@ def linear_eq_to_matrix(equations, *symbols):\n if hasattr(symbols[0], '__iter__'):\n symbols = symbols[0]\n \n- for i in symbols:\n- if not isinstance(i, Symbol):\n- raise ValueError(filldedent('''\n- Expecting a Symbol but got %s\n- ''' % i))\n-\n if has_dups(symbols):\n raise ValueError('Symbols must be unique')\n \n@@ -2594,14 +2600,19 @@ def linear_eq_to_matrix(equations, *symbols):\n Eq or Matrix.\n '''))\n \n- A, b = [], []\n- for i, f in enumerate(equations):\n- if isinstance(f, Equality):\n- f = f.rewrite(Add, evaluate=False)\n- coeff_list = linear_coeffs(f, *symbols)\n- b.append(-coeff_list.pop())\n- A.append(coeff_list)\n- A, b = map(Matrix, (A, b))\n+ # construct the dictionaries\n+ try:\n+ eq, c = _linear_eq_to_dict(equations, symbols)\n+ except PolyNonlinearError as err:\n+ raise NonlinearError(str(err))\n+ # prepare output matrices\n+ n, m = shape = len(eq), len(symbols)\n+ ix = dict(zip(symbols, range(m)))\n+ dat = {(row, ix[k]): d[k] for row, d in enumerate(eq) for k in d}\n+ rhs = [-i for i in c]\n+ del c\n+ A = SparseMatrix(*shape, dat)\n+ b = SparseMatrix(n, 1, rhs)\n return A, b\n \n \n@@ -2760,16 +2771,23 @@ def linsolve(system, *symbols):\n >>> linsolve([], x)\n EmptySet\n \n- * An error is raised if, after expansion, any nonlinearity\n- is detected:\n+ * An error is raised if any nonlinearity is detected, even\n+ if it could be removed with expansion\n+\n+ >>> linsolve([x*(1/x - 1)], x)\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError: nonlinear term: 1/x\n+\n+ >>> linsolve([x*(y + 1)], x, y)\n+ Traceback (most recent call last):\n+ ...\n+ NonlinearError: nonlinear cross-term: x*(y + 1)\n \n- >>> linsolve([x*(1/x - 1), (y - 1)**2 - y**2 + 1], x, y)\n- {(1, 1)}\n >>> linsolve([x**2 - 1], x)\n Traceback (most recent call last):\n ...\n- NonlinearError:\n- nonlinear term encountered: x**2\n+ NonlinearError: nonlinear term: x**2\n \"\"\"\n if not system:\n return S.EmptySet\ndiff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py\nindex 87def61c7632..64676fcc3e71 100644\n--- a/sympy/solvers/tests/test_solveset.py\n+++ b/sympy/solvers/tests/test_solveset.py\n@@ -1362,6 +1362,10 @@ def test_abs_invert_solvify():\n \n \n def test_linear_eq_to_matrix():\n+ assert linear_eq_to_matrix(0, x) == (Matrix([[0]]), Matrix([[0]]))\n+ assert linear_eq_to_matrix(1, x) == (Matrix([[0]]), Matrix([[-1]]))\n+\n+ # integer coefficients\n eqns1 = [2*x + y - 2*z - 3, x - y - z, x + y + 3*z - 12]\n eqns2 = [Eq(3*x + 2*y - z, 1), Eq(2*x - 2*y + 4*z, -2), -2*x + y - 2*z]\n \n@@ -1379,17 +1383,24 @@ def test_linear_eq_to_matrix():\n assert A == Matrix([[a*b, b, c], [d + e, f, 
g], [i, j, k]])\n assert B == Matrix([[d], [h], [l]])\n \n- # raise ValueError if\n+ # raise Errors if\n # 1) no symbols are given\n raises(ValueError, lambda: linear_eq_to_matrix(eqns3))\n # 2) there are duplicates\n raises(ValueError, lambda: linear_eq_to_matrix(eqns3, [x, x, y]))\n- # 3) there are non-symbols\n- raises(ValueError, lambda: linear_eq_to_matrix(eqns3, [x, 1/a, y]))\n- # 4) a nonlinear term is detected in the original expression\n+ # 3) a nonlinear term is detected in the original expression\n raises(NonlinearError, lambda: linear_eq_to_matrix(Eq(1/x + x, 1/x), [x]))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix([x**2], [x]))\n+ raises(NonlinearError, lambda: linear_eq_to_matrix([x*y], [x, y]))\n+ # 4) Eq being used to represent equations autoevaluates\n+ # (use unevaluated Eq instead)\n+ raises(ValueError, lambda: linear_eq_to_matrix(Eq(x, x), x))\n+ raises(ValueError, lambda: linear_eq_to_matrix(Eq(x, x + 1), x))\n+\n+\n+ # if non-symbols are passed, the user is responsible for interpreting\n+ assert linear_eq_to_matrix([x], [1/x]) == (Matrix([[0]]), Matrix([[-x]]))\n \n- assert linear_eq_to_matrix(1, x) == (Matrix([[0]]), Matrix([[-1]]))\n # issue 15195\n assert linear_eq_to_matrix(x + y*(z*(3*x + 2) + 3), x) == (\n Matrix([[3*y*z + 1]]), Matrix([[-y*(2*z + 3)]]))\n@@ -1502,12 +1513,11 @@ def test_linsolve():\n assert linsolve(Eqns, x, y) == {\n (kilo*newton*Rational(-28, 3), kN*Rational(4, 3))}\n \n- # linsolve fully expands expressions, so removable singularities\n- # and other nonlinearity does not raise an error\n+ # linsolve does not allow expansion (real or implemented)\n+ # to remove singularities, but it will cancel linear terms\n assert linsolve([Eq(x, x + y)], [x, y]) == {(x, 0)}\n- assert linsolve([Eq(1/x, 1/x + y)], [x, y]) == {(x, 0)}\n- assert linsolve([Eq(y/x, y/x + y)], [x, y]) == {(x, 0)}\n- assert linsolve([Eq(x*(x + 1), x**2 + y)], [x, y]) == {(y, y)}\n+ raises(NonlinearError, lambda:\n+ linsolve([Eq(x**2, x**2 + y)], [x, y]))\n \n # corner cases\n #\n@@ -2785,8 +2795,11 @@ def test_linear_coeffs():\n linear_coeffs(x, x, x))\n assert linear_coeffs(a*(x + y), x, y) == [a, a, 0]\n assert linear_coeffs(1.0, x, y) == [0, 0, 1.0]\n+ # don't include coefficients of 0\n+ assert linear_coeffs(Eq(x, x + y), x, y, dict=True) == {y: 1}\n+ assert linear_coeffs(0, x, y, dict=True) == {}\n+\n \n-# modular tests\n def test_is_modular():\n assert _is_modular(y, x) is False\n assert _is_modular(Mod(x, 3) - 1, x) is True\n" }
[ { "diff_hunk": "@@ -2594,14 +2600,19 @@ def linear_eq_to_matrix(equations, *symbols):\n Eq or Matrix.\n '''))\n \n- A, b = [], []\n- for i, f in enumerate(equations):\n- if isinstance(f, Equality):\n- f = f.rewrite(Add, evaluate=False)\n- coeff_list = linear_coeffs(f, *symbols)\n- b.append(-coeff_list.pop())\n- A.append(coeff_list)\n- A, b = map(Matrix, (A, b))\n+ # construct the dictionaries\n+ try:\n+ eq, c = _linear_eq_to_dict(equations, symbols)\n+ except PolyNonlinearError as err:\n+ raise NonlinearError(str(err))\n+ # prepare output matrices\n+ n, m = shape = len(eq), len(symbols)\n+ ix = dict(zip(symbols, range(m)))\n+ dat = {(row, ix[k]): d[k] for row, d in enumerate(eq) for k in d}\n+ rhs = [-i for i in c]\n+ del c\n+ A = SparseMatrix(*shape, dat)\n+ b = SparseMatrix(n, 1, rhs)", "line": null, "original_line": 2615, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@user1:\nI pretty much want to abolish SparseMatrix so it would be better not to add new uses of it." }, { "diff_hunk": "@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s):", "line": null, "original_line": 1261, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\nMaybe this should be type-hinted as `s: set[Basic]`.\n\n@author:\nI don't follow you. `s` should be a set, not Basic.\n\n@user1:\nIt should be a `set` of `Basic` which is what `set[Basic]` means in typing syntax." }, { "diff_hunk": "@@ -194,16 +199,30 @@ def _lin_eq2dict(a, symset):\n terms = ti\n terms_coeff = ci\n else:\n- raise PolyNonlinearError\n- coeff = Mul(*coeff_list)\n+ # since ti is not null and we already have\n+ # a term, this is a cross term\n+ raise PolyNonlinearError(filldedent('''\n+ nonlinear cross-term: %s''' % a))\n+ coeff = Mul._from_args(coeff_list)\n if terms is None:\n return coeff, {}\n else:\n terms = {sym: coeff * c for sym, c in terms.items()}\n return coeff * terms_coeff, terms\n- elif a.is_Equality:\n- return _lin_eq2dict(a.lhs - a.rhs, symset)\n- elif not a.has_free(*symset):\n+ if a.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in a.args]", "line": null, "original_line": 214, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@user1:\nMaybe equalities should be handled at a higher level before calling this function.\n\n@author:\nI had already considered that. It is here because otherwise anyone using any of the linear coefficient extracting routines would have to duplicate this code to carefully make sure there is no linearity and then allow cancellation of linear terms.\n\n@user1:\nIt could be handled by `_linear_eq_to_dict` for example.\r\n\r\nThe checking for zeros probably needs to be handled more broadly than just for `Equality`." 
}, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]", "line": null, "original_line": 158, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@user1:\nA loop to avoid repeating a line just twice is not really a good application of DRY. It's a lot easier to understand the straight-forward repetitive code:\r\n```python\r\ncoeff, terms = _lin_eq2dict(e.lhs, symset)\r\ncR, tR = _lin_eq2dict(e.rhs, symset)\r\n```" }, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]\n+ # there were no nonlinear errors so now\n+ # cancellation is allowed\n+ coeff -= cR\n+ for k, v in tR.items():\n+ if k in terms:\n+ terms[k] -= v\n+ else:\n+ terms[k] = v", "line": null, "original_line": 166, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@user1:\nShould this not be `-v`?\n\n@author:\nHmm.. that should be tested." 
}, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]\n+ # there were no nonlinear errors so now\n+ # cancellation is allowed\n+ coeff -= cR\n+ for k, v in tR.items():\n+ if k in terms:\n+ terms[k] -= v\n+ else:\n+ terms[k] = v\n+ # don't store coefficients of 0, however\n+ terms = {k: v for k, v in terms.items() if v}\n+ c, d = coeff, terms\n+ else:\n+ c, d = _lin_eq2dict(e, symset)\n+ coeffs.append(d)\n+ ind.append(c)\n+ return coeffs, ind\n \n- return [expand_eq(eq) for eq in eqs]\n \n+def _lin_eq2dict(a, symset):\n+ \"\"\"return (c, d) where c is the sym-independent part of ``a`` and\n+ ``d`` is an efficiently calculated dictionary mapping symbols to\n+ their coefficients. A PolyNonlinearError is raised if non-linearity\n+ is detected.\n \n-def _linear_eq_to_dict(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- try:\n- return _linear_eq_to_dict_inner(eqs, syms)\n- except PolyNonlinearError:\n- # XXX: This should be deprecated:\n- eqs = _expand_eqs_deprecated(eqs)\n- return _linear_eq_to_dict_inner(eqs, syms)\n-\n-\n-def _linear_eq_to_dict_inner(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- syms = set(syms)\n- eqsdict, eqs_rhs = [], []\n- for eq in eqs:\n- rhs, eqdict = _lin_eq2dict(eq, syms)\n- eqsdict.append(eqdict)\n- eqs_rhs.append(rhs)\n- return eqsdict, eqs_rhs\n+ The values in the dictionary will be non-zero.\n \n+ Examples\n+ ========\n \n-def _lin_eq2dict(a, symset):\n- \"\"\"Efficiently convert a linear equation to a dict of coefficients\"\"\"\n+ >>> from sympy.polys.matrices.linsolve import _lin_eq2dict\n+ >>> from sympy.abc import x, y\n+ >>> _lin_eq2dict(x + 2*y + 3, {x, y})\n+ (3, {x: 1, y: 2})\n+ \"\"\"\n if a in symset:\n return S.Zero, {a: S.One}\n- elif a.is_Add:\n+ if a.is_Add:", "line": null, "original_line": 195, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@user1:\nWhy have you changed these `elif` to `if`? It's much clearer to use `elif` to show that all cases are mutually exclusive without needing to scan through to see that every control path either returns or raises.\n\n@author:\nPreference. If every block clearly ends with a return (or starts with an elif) coverage is complete." 
}, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]", "line": null, "original_line": 158, "original_start_line": 157, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n coeff, terms = _lin_eq2dict(e.lhs, symset)\r\n cR, tR = _lin_eq2dict(e.rhs, symset)\r\n```" }, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]\n+ # there were no nonlinear errors so now\n+ # cancellation is allowed\n+ coeff -= cR\n+ for k, v in tR.items():\n+ if k in terms:\n+ terms[k] -= v\n+ else:\n+ terms[k] = v", "line": null, "original_line": 166, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n terms[k] = -v\r\n```" }, { "diff_hunk": "@@ -126,53 +129,70 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms):\n for eq, rhs in zip(eqs_coeffs, eqs_rhs):\n eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()}\n if rhs:\n- eqdict[nsyms] = - elem_map[rhs]\n+ eqdict[nsyms] = -elem_map[rhs]\n if eqdict:\n eqsdict.append(eqdict)\n- sdm_aug = 
SDM(enumerate(eqsdict), (neqs, nsyms+1), K)\n+ sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K)\n return sdm_aug\n \n \n-def _expand_eqs_deprecated(eqs):\n- \"\"\"Use expand to cancel nonlinear terms.\n+def _linear_eq_to_dict(eqs, syms):\n+ \"\"\"Convert a system Expr/Eq equations into dict form, returning\n+ the coefficient dictionaries and a list of syms-independent terms\n+ from each expression in ``eqs```.\n \n- This approach matches previous behaviour of linsolve but should be\n- deprecated.\n+ Examples\n+ ========\n+\n+ >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict\n+ >>> from sympy.abc import x\n+ >>> _linear_eq_to_dict([2*x + 3], {x})\n+ ([{x: 2}], [3])\n \"\"\"\n- def expand_eq(eq):\n- if eq.is_Equality:\n- eq = eq.lhs - eq.rhs\n- return eq.expand()\n+ coeffs = []\n+ ind = []\n+ symset = set(syms)\n+ for i, e in enumerate(eqs):\n+ if e.is_Equality:\n+ (coeff, terms), (cR, tR) = [_lin_eq2dict(ai, symset)\n+ for ai in e.args]\n+ # there were no nonlinear errors so now\n+ # cancellation is allowed\n+ coeff -= cR\n+ for k, v in tR.items():\n+ if k in terms:\n+ terms[k] -= v\n+ else:\n+ terms[k] = v\n+ # don't store coefficients of 0, however\n+ terms = {k: v for k, v in terms.items() if v}\n+ c, d = coeff, terms\n+ else:\n+ c, d = _lin_eq2dict(e, symset)\n+ coeffs.append(d)\n+ ind.append(c)\n+ return coeffs, ind\n \n- return [expand_eq(eq) for eq in eqs]\n \n+def _lin_eq2dict(a, symset):\n+ \"\"\"return (c, d) where c is the sym-independent part of ``a`` and\n+ ``d`` is an efficiently calculated dictionary mapping symbols to\n+ their coefficients. A PolyNonlinearError is raised if non-linearity\n+ is detected.\n \n-def _linear_eq_to_dict(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- try:\n- return _linear_eq_to_dict_inner(eqs, syms)\n- except PolyNonlinearError:\n- # XXX: This should be deprecated:\n- eqs = _expand_eqs_deprecated(eqs)\n- return _linear_eq_to_dict_inner(eqs, syms)\n-\n-\n-def _linear_eq_to_dict_inner(eqs, syms):\n- \"\"\"Convert a system Expr/Eq equations into dict form\"\"\"\n- syms = set(syms)\n- eqsdict, eqs_rhs = [], []\n- for eq in eqs:\n- rhs, eqdict = _lin_eq2dict(eq, syms)\n- eqsdict.append(eqdict)\n- eqs_rhs.append(rhs)\n- return eqsdict, eqs_rhs\n+ The values in the dictionary will be non-zero.\n \n+ Examples\n+ ========\n \n-def _lin_eq2dict(a, symset):\n- \"\"\"Efficiently convert a linear equation to a dict of coefficients\"\"\"\n+ >>> from sympy.polys.matrices.linsolve import _lin_eq2dict\n+ >>> from sympy.abc import x, y\n+ >>> _lin_eq2dict(x + 2*y + 3, {x, y})\n+ (3, {x: 1, y: 2})\n+ \"\"\"\n if a in symset:\n return S.Zero, {a: S.One}\n- elif a.is_Add:\n+ if a.is_Add:", "line": null, "original_line": 195, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n elif a.is_Add:\r\n```" }, { "diff_hunk": "@@ -183,7 +203,7 @@ def _lin_eq2dict(a, symset):\n coeff = Add(*coeff_list)\n terms = {sym: Add(*coeffs) for sym, coeffs in terms_list.items()}\n return coeff, terms\n- elif a.is_Mul:\n+ if a.is_Mul:", "line": null, "original_line": 206, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n elif a.is_Mul:\r\n```" }, { "diff_hunk": "@@ -194,16 +214,16 @@ def _lin_eq2dict(a, symset):\n terms = ti\n terms_coeff = ci\n else:\n- raise PolyNonlinearError\n- coeff = Mul(*coeff_list)\n+ # since ti is not null and we already have\n+ # a term, this is a cross 
term\n+ raise PolyNonlinearError(filldedent('''\n+ nonlinear cross-term: %s''' % a))\n+ coeff = Mul._from_args(coeff_list)\n if terms is None:\n return coeff, {}\n else:\n terms = {sym: coeff * c for sym, c in terms.items()}\n return coeff * terms_coeff, terms\n- elif a.is_Equality:\n- return _lin_eq2dict(a.lhs - a.rhs, symset)\n- elif not a.has_free(*symset):\n+ if not a.has_xfree(symset):", "line": null, "original_line": 227, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n elif not a.has_xfree(symset):\r\n```" }, { "diff_hunk": "@@ -194,16 +214,16 @@ def _lin_eq2dict(a, symset):\n terms = ti\n terms_coeff = ci\n else:\n- raise PolyNonlinearError\n- coeff = Mul(*coeff_list)\n+ # since ti is not null and we already have\n+ # a term, this is a cross term\n+ raise PolyNonlinearError(filldedent('''\n+ nonlinear cross-term: %s''' % a))\n+ coeff = Mul._from_args(coeff_list)\n if terms is None:\n return coeff, {}\n else:\n terms = {sym: coeff * c for sym, c in terms.items()}\n return coeff * terms_coeff, terms\n- elif a.is_Equality:\n- return _lin_eq2dict(a.lhs - a.rhs, symset)\n- elif not a.has_free(*symset):\n+ if not a.has_xfree(symset):\n return a, {}\n- else:\n- raise PolyNonlinearError\n+ raise PolyNonlinearError('nonlinear term: %s' % a)", "line": null, "original_line": 229, "original_start_line": null, "path": "sympy/polys/matrices/linsolve.py", "start_line": null, "text": "@author:\n```suggestion\r\n else:\r\n raise PolyNonlinearError('nonlinear term: %s' % a)\r\n```" }, { "diff_hunk": "@@ -1624,12 +1625,31 @@ def test_has_free():\n assert Integral(f(x), (f(x), 1, y)).has_free(y)\n assert not Integral(f(x), (f(x), 1, y)).has_free(x)\n assert not Integral(f(x), (f(x), 1, y)).has_free(f(x))\n+ # simple extraction\n+ assert (x + 1 + y).has_free(x + 1)\n+ assert not (x + 2 + y).has_free(x + 1)\n+ assert (2 + 3*x*y).has_free(3*x)\n+ raises(ValueError, lambda: x.has_free({x, y}))\n+ s = FiniteSet(1, 2)\n+ assert Piecewise((s, x > 3), (4, True)).has_free(s)\n+ assert not Piecewise((1, x > 3), (4, True)).has_free(s)\n+ # can't make set of these, but fallback will handle\n+ assert not x.has_free(y, [])", "line": null, "original_line": 1637, "original_start_line": null, "path": "sympy/core/tests/test_expr.py", "start_line": null, "text": "@user1:\nShouldn't this just give a TypeError?\n\n@author:\nI'm not sure that it is yet a valid assumption that all Basic args are hashable. If it is, then an error can be raised.\n\n@user1:\nBasic args should be Basic and therefore should be hashable. If they aren't then that's a bug and it should be fixed. We shouldn't try to work around bugs like that because it just makes them less visible. There shouldn't be any code that attempts to allow for Basic to have non-Basic args." }, { "diff_hunk": "@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s: set[Basic]):\n+ \"\"\"return True if self has any of the patterns in s as a\n+ free argument, else False. 
This is like `Basic.has_free`\n+ but this will only report exact argument matches.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import Function\n+ >>> from sympy.abc import x, y\n+ >>> f = Function('f')\n+ >>> f(x).has_xfree({f})\n+ False\n+ >>> f(x).has_xfree({f(x)})\n+ True\n+ >>> f(x + 1).has_xfree({x})\n+ True\n+ >>> f(x + 1).has_xfree({x + 1})\n+ True\n+ >>> f(x + y + 1).has_xfree({x + 1})\n+ False\n+ \"\"\"\n+ # protect O(1) containment check by requiring:\n+ if not type(s) in (dict, set):\n+ raise ValueError('expecting set or dict argument')", "line": null, "original_line": 1285, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\nI guess TypeError is more appropriate. The type hint in the signature suggests that only `set` is allowed. I suggest only allowing `set` here. I don't see a strong benefit in anything else.\n\n@author:\nI put the dict there because you indicated that dict or set would have O(1) lookup. And there is no reason to force the user to convert dict to set if it can be used directly. I can make up a case where someone would want to pass a dict, but I'm not sure *I* would do so in practice. But I have appreciated the flexibility of Python to just \"do the right thing\" if no rules are broken...and to enforce as few rules as necessary." }, { "diff_hunk": "@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s: set[Basic]):", "line": 1261, "original_line": 1261, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\nIf we're type-hinting then we can also add the return type `-> bool`." }, { "diff_hunk": "@@ -1284,8 +1311,27 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise ValueError(filldedent('''\n+ Expecting 1 or more Basic args, not a single\n+ non-Basic iterable. Don't forget to unpack\n+ iterables: `eq.has_free(*patterns)`'''))\n+ # try quick test first\n+ try:\n+ s = set(patterns)\n+ except TypeError:\n+ pass # patterns had a non-hashable element", "line": null, "original_line": 1329, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\nAs I've said before I would prefer not to catch `TypeError`. There are too many places that do this. A `TypeError` should almost always mean a bug and so the error should not generally be caught.\n\n@author:\nPlease make a suggestion since there is no other error to catch and the scope has been limited to a single obvious case of why it might be raised. \r\n\r\nThe reason this is here is because if we can make a set of the items and this test passes then it is a quick exit. And if not (either unhashable items or pattern not found) then we go to the next step where unhashable things are thrown out. 
In case of unhashable I don't want to let this raise for the user because the next step will ignore them (or could its own TypeError).\n\n@user1:\nWhy we can't we just let this TypeError bubble up?\n\n@author:\nI guess we can see what fails...will make the change" }, { "diff_hunk": "@@ -1284,8 +1311,27 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise ValueError(filldedent('''", "line": null, "original_line": 1321, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\n`TypeError` would make more sense here." }, { "diff_hunk": "@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s: set[Basic]):", "line": 1261, "original_line": 1261, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@author:\n```suggestion\r\n def has_xfree(self, s: set[Basic]) -> bool:\r\n```" }, { "diff_hunk": "@@ -1284,8 +1311,26 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise ValueError(filldedent('''\n+ Expecting 1 or more Basic args, not a single\n+ non-Basic iterable. Don't forget to unpack\n+ iterables: `eq.has_free(*patterns)`'''))\n+ # try quick test first\n+ try:\n+ s = set(patterns)\n+ rv = self.has_xfree(s)\n+ if rv:\n+ return rv\n+ except TypeError:", "line": null, "original_line": 1331, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@user1:\nIt's usually a bad idea to catch `TypeError`. Why is it being caught here?\n\n@author:\nIn case of an un unhashable pattern which is not handled until we get to `_has`.\n\n@user1:\nWhat is an example of an unhashable pattern? Would they not be hashable if sympified?\n\n@user1:\nI would still prefer not to catch `TypeError`. It is not a good class of exceptions to catch. At least if we are catching it then the `try` block needs to be restricted in scope to a single operation that is the one where `TypeError` is expected and there should be comments explaining precisely where the `TypeError` can be expected to come from." 
}, { "diff_hunk": "@@ -1284,8 +1311,27 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise ValueError(filldedent('''", "line": null, "original_line": 1321, "original_start_line": null, "path": "sympy/core/basic.py", "start_line": null, "text": "@author:\n```suggestion\r\n raise TypeError(filldedent('''\r\n```" }, { "diff_hunk": "@@ -1624,12 +1625,31 @@ def test_has_free():\n assert Integral(f(x), (f(x), 1, y)).has_free(y)\n assert not Integral(f(x), (f(x), 1, y)).has_free(x)\n assert not Integral(f(x), (f(x), 1, y)).has_free(f(x))\n+ # simple extraction\n+ assert (x + 1 + y).has_free(x + 1)\n+ assert not (x + 2 + y).has_free(x + 1)\n+ assert (2 + 3*x*y).has_free(3*x)\n+ raises(ValueError, lambda: x.has_free({x, y}))", "line": null, "original_line": 1632, "original_start_line": null, "path": "sympy/core/tests/test_expr.py", "start_line": null, "text": "@author:\n```suggestion\r\n raises(TypeError, lambda: x.has_free({x, y}))\r\n```" }, { "diff_hunk": "@@ -1624,12 +1625,31 @@ def test_has_free():\n assert Integral(f(x), (f(x), 1, y)).has_free(y)\n assert not Integral(f(x), (f(x), 1, y)).has_free(x)\n assert not Integral(f(x), (f(x), 1, y)).has_free(f(x))\n+ # simple extraction\n+ assert (x + 1 + y).has_free(x + 1)\n+ assert not (x + 2 + y).has_free(x + 1)\n+ assert (2 + 3*x*y).has_free(3*x)\n+ raises(TypeError, lambda: x.has_free({x, y}))\n+ s = FiniteSet(1, 2)\n+ assert Piecewise((s, x > 3), (4, True)).has_free(s)\n+ assert not Piecewise((1, x > 3), (4, True)).has_free(s)\n+ # can't make set of these, but fallback will handle\n+ assert not x.has_free(y, [])", "line": null, "original_line": 1637, "original_start_line": null, "path": "sympy/core/tests/test_expr.py", "start_line": null, "text": "@author:\n```suggestion\r\n raises(TypeError, lambda: x.has_free(y, []))\r\n```" }, { "diff_hunk": "@@ -1284,8 +1311,27 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise TypeError(filldedent('''\n+ Expecting 1 or more Basic args, not a single\n+ non-Basic iterable. Don't forget to unpack\n+ iterables: `eq.has_free(*patterns)`'''))\n+ # try quick test first\n+ try:\n+ s = set(patterns)\n+ except TypeError:\n+ pass # patterns had a non-hashable element", "line": null, "original_line": 1329, "original_start_line": 1326, "path": "sympy/core/basic.py", "start_line": null, "text": "@author:\n```suggestion\r\n s = set(patterns)\r\n```" }, { "diff_hunk": "@@ -1258,6 +1258,33 @@ def has(self, *patterns):\n \"\"\"\n return self._has(iterargs, *patterns)\n \n+ def has_xfree(self, s: set[Basic]):\n+ \"\"\"return True if self has any of the patterns in s as a\n+ free argument, else False. 
This is like `Basic.has_free`\n+ but this will only report exact argument matches.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import Function\n+ >>> from sympy.abc import x, y\n+ >>> f = Function('f')\n+ >>> f(x).has_xfree({f})\n+ False\n+ >>> f(x).has_xfree({f(x)})\n+ True\n+ >>> f(x + 1).has_xfree({x})\n+ True\n+ >>> f(x + 1).has_xfree({x + 1})\n+ True\n+ >>> f(x + y + 1).has_xfree({x + 1})\n+ False\n+ \"\"\"\n+ # protect O(1) containment check by requiring:\n+ if not type(s) in (dict, set):\n+ raise ValueError('expecting set or dict argument')", "line": null, "original_line": 1285, "original_start_line": 1284, "path": "sympy/core/basic.py", "start_line": null, "text": "@author:\n```suggestion\r\n if type(s) is not set:\r\n raise TypeError('expecting set argument')\r\n```" }, { "diff_hunk": "@@ -2785,8 +2797,11 @@ def test_linear_coeffs():\n linear_coeffs(x, x, x))\n assert linear_coeffs(a*(x + y), x, y) == [a, a, 0]\n assert linear_coeffs(1.0, x, y) == [0, 0, 1.0]\n+ # don't include coefficients of 0\n+ assert linear_coeffs(Eq(x, x + y), x, y, dict=True) == {y: 1}", "line": null, "original_line": 2801, "original_start_line": null, "path": "sympy/solvers/tests/test_solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n assert linear_coeffs(Eq(x, x + y), x, y, dict=True) == {y: -1}\r\n```" }, { "diff_hunk": "@@ -1624,12 +1625,31 @@ def test_has_free():\n assert Integral(f(x), (f(x), 1, y)).has_free(y)\n assert not Integral(f(x), (f(x), 1, y)).has_free(x)\n assert not Integral(f(x), (f(x), 1, y)).has_free(f(x))\n+ # simple extraction\n+ assert (x + 1 + y).has_free(x + 1)\n+ assert not (x + 2 + y).has_free(x + 1)\n+ assert (2 + 3*x*y).has_free(3*x)\n+ raises(TypeError, lambda: x.has_free({x, y}))\n+ s = FiniteSet(1, 2)\n+ assert Piecewise((s, x > 3), (4, True)).has_free(s)\n+ assert not Piecewise((1, x > 3), (4, True)).has_free(s)\n+ # can't make set of these, but fallback will handle\n+ raises(TypeError, lambda: x.has_free(y, []))\n+\n+\n+def test_has_xfree():\n+ assert (x + 1).has_xfree({x})\n+ assert ((x + 1)**2).has_xfree({x + 1})\n+ assert not (x + y + 1).has_xfree({x + 1})\n+ raises(ValueError, lambda: x.has_xfree(x))\n+ raises(ValueError, lambda: x.has_xfree([x]))", "line": null, "original_line": 1645, "original_start_line": 1644, "path": "sympy/core/tests/test_expr.py", "start_line": null, "text": "@author:\n```suggestion\r\n raises(TypeError, lambda: x.has_xfree(x))\r\n raises(TypeError, lambda: x.has_xfree([x]))\r\n```" }, { "diff_hunk": "@@ -2760,16 +2771,23 @@ def linsolve(system, *symbols):\n >>> linsolve([], x)\n EmptySet\n \n- * An error is raised if, after expansion, any nonlinearity\n- is detected:\n+ * An error is raised if any nonlinearity is detected, even\n+ if it could be removed with expansion", "line": null, "original_line": 2775, "original_start_line": 2774, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@author:\n```suggestion\r\n * An error is raised if any nonlinearity is detected, even\r\n if it could be removed with expansion\r\n```" }, { "diff_hunk": "@@ -1284,8 +1311,24 @@ def has_free(self, *patterns):\n True\n >>> (x + y + 1).has_free(y + 1)\n True\n-\n \"\"\"\n+ if not patterns:\n+ return False\n+ p0 = patterns[0]\n+ if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic):\n+ # Basic can contain iterables (though not non-Basic, ideally)\n+ # but don't encourage mixed passing patterns\n+ raise TypeError(filldedent('''\n+ Expecting 1 or more Basic args, not a single\n+ non-Basic iterable. 
Don't forget to unpack\n+ iterables: `eq.has_free(*patterns)`'''))\n+ # try quick test first\n+ s = set(patterns)\n+ else:\n+ rv = self.has_xfree(s)\n+ if rv:\n+ return rv", "line": null, "original_line": 1330, "original_start_line": 1327, "path": "sympy/core/basic.py", "start_line": null, "text": "@author:\n```suggestion\r\n rv = self.has_xfree(s)\r\n if rv:\r\n return rv\r\n```" }, { "diff_hunk": "@@ -2760,16 +2771,23 @@ def linsolve(system, *symbols):\n >>> linsolve([], x)\n EmptySet\n \n- * An error is raised if, after expansion, any nonlinearity\n- is detected:\n+ * An error is raised if any nonlinearity is detected, even\n+ if it could be removed with expansion", "line": null, "original_line": 2775, "original_start_line": null, "path": "sympy/solvers/solveset.py", "start_line": null, "text": "@user1:\nI'm guessing that the indentation here is the sphinx problem:\r\n```\r\n/home/runner/work/sympy/sympy/sympy/solvers/solveset.py:docstring of sympy.solvers.solveset.linsolve:169: WARNING: Bullet list ends without a blank line; unexpected unindent.\r\n```\n\n@author:\nI used the editor to search for leading space followed by `*`, looking for this case. Not sure how I missed it! Thanks." } ]
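To make the thread above easier to follow: the resolution it converges on is that `Basic.has_xfree` only reports exact free-argument matches and insists on a `set` argument, while `Basic.has_free` keeps matching composite subexpressions but raises `TypeError` for a packed iterable. A minimal sketch of that behavior, assuming a SymPy build that already contains the merged changes below:

```python
from sympy import Function, symbols

x, y = symbols('x y')
f = Function('f')

# has_free still matches composite free subexpressions ...
assert (x + y + 1).has_free(y + 1)

# ... while has_xfree only reports exact free-argument matches
assert f(x + 1).has_xfree({x + 1})
assert not f(x + y + 1).has_xfree({x + 1})

# per the review outcome, a packed iterable is rejected with TypeError
# rather than being silently treated as a single pattern
try:
    x.has_free({x, y})  # unpack instead: x.has_free(x, y)
except TypeError:
    pass
```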
526e42880b9948217ecd478767c83b2af95a62b7
diff --git a/sympy/core/basic.py b/sympy/core/basic.py index 14640c32d9d3..28ccb1040fa8 100644 --- a/sympy/core/basic.py +++ b/sympy/core/basic.py @@ -1258,6 +1258,33 @@ def has(self, *patterns): """ return self._has(iterargs, *patterns) + def has_xfree(self, s: set[Basic]): + """return True if self has any of the patterns in s as a + free argument, else False. This is like `Basic.has_free` + but this will only report exact argument matches. + + Examples + ======== + + >>> from sympy import Function + >>> from sympy.abc import x, y + >>> f = Function('f') + >>> f(x).has_xfree({f}) + False + >>> f(x).has_xfree({f(x)}) + True + >>> f(x + 1).has_xfree({x}) + True + >>> f(x + 1).has_xfree({x + 1}) + True + >>> f(x + y + 1).has_xfree({x + 1}) + False + """ + # protect O(1) containment check by requiring: + if type(s) is not set: + raise TypeError('expecting set argument') + return any(a in s for a in iterfreeargs(self)) + @cacheit def has_free(self, *patterns): """return True if self has object(s) ``x`` as a free expression @@ -1284,8 +1311,23 @@ def has_free(self, *patterns): True >>> (x + y + 1).has_free(y + 1) True - """ + if not patterns: + return False + p0 = patterns[0] + if len(patterns) == 1 and iterable(p0) and not isinstance(p0, Basic): + # Basic can contain iterables (though not non-Basic, ideally) + # but don't encourage mixed passing patterns + raise TypeError(filldedent(''' + Expecting 1 or more Basic args, not a single + non-Basic iterable. Don't forget to unpack + iterables: `eq.has_free(*patterns)`''')) + # try quick test first + s = set(patterns) + rv = self.has_xfree(s) + if rv: + return rv + # now try matching through slower _has return self._has(iterfreeargs, *patterns) def _has(self, iterargs, *patterns): diff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py index d94201d65dc2..ee5465f33ee3 100644 --- a/sympy/core/tests/test_expr.py +++ b/sympy/core/tests/test_expr.py @@ -31,6 +31,7 @@ from sympy.polys.polytools import factor, cancel, Poly from sympy.polys.rationaltools import together from sympy.series.order import O +from sympy.sets.sets import FiniteSet from sympy.simplify.combsimp import combsimp from sympy.simplify.gammasimp import gammasimp from sympy.simplify.powsimp import powsimp @@ -1624,12 +1625,31 @@ def test_has_free(): assert Integral(f(x), (f(x), 1, y)).has_free(y) assert not Integral(f(x), (f(x), 1, y)).has_free(x) assert not Integral(f(x), (f(x), 1, y)).has_free(f(x)) + # simple extraction + assert (x + 1 + y).has_free(x + 1) + assert not (x + 2 + y).has_free(x + 1) + assert (2 + 3*x*y).has_free(3*x) + raises(TypeError, lambda: x.has_free({x, y})) + s = FiniteSet(1, 2) + assert Piecewise((s, x > 3), (4, True)).has_free(s) + assert not Piecewise((1, x > 3), (4, True)).has_free(s) + # can't make set of these, but fallback will handle + raises(TypeError, lambda: x.has_free(y, [])) + + +def test_has_xfree(): + assert (x + 1).has_xfree({x}) + assert ((x + 1)**2).has_xfree({x + 1}) + assert not (x + y + 1).has_xfree({x + 1}) + raises(TypeError, lambda: x.has_xfree(x)) + raises(TypeError, lambda: x.has_xfree([x])) def test_issue_5300(): x = Symbol('x', commutative=False) assert x*sqrt(2)/sqrt(6) == x*sqrt(3)/3 + def test_floordiv(): from sympy.functions.elementary.integers import floor assert x // y == floor(x / y) diff --git a/sympy/core/traversal.py b/sympy/core/traversal.py index 980ff4a03381..0615c221a538 100644 --- a/sympy/core/traversal.py +++ b/sympy/core/traversal.py @@ -41,6 +41,7 @@ def iterfreeargs(expr, _first=True): Examples 
======== + >>> from sympy import Integral, Function >>> from sympy.abc import x >>> f = Function('f') @@ -66,8 +67,6 @@ def iterfreeargs(expr, _first=True): pass # for cases like f being an arg - - class preorder_traversal: """ Do a pre-order traversal of a tree. diff --git a/sympy/polys/matrices/linsolve.py b/sympy/polys/matrices/linsolve.py index 75ae26d669b2..08fa5030f8f0 100644 --- a/sympy/polys/matrices/linsolve.py +++ b/sympy/polys/matrices/linsolve.py @@ -42,8 +42,11 @@ sdm_nullspace_from_rref ) +from sympy.utilities.misc import filldedent + def _linsolve(eqs, syms): + """Solve a linear system of equations. Examples @@ -69,8 +72,8 @@ def _linsolve(eqs, syms): nsyms = len(syms) # Convert to sparse augmented matrix (len(eqs) x (nsyms+1)) - eqsdict, rhs = _linear_eq_to_dict(eqs, syms) - Aaug = sympy_dict_to_dm(eqsdict, rhs, syms) + eqsdict, const = _linear_eq_to_dict(eqs, syms) + Aaug = sympy_dict_to_dm(eqsdict, const, syms) K = Aaug.domain # sdm_irref has issues with float matrices. This uses the ddm_rref() @@ -126,50 +129,67 @@ def sympy_dict_to_dm(eqs_coeffs, eqs_rhs, syms): for eq, rhs in zip(eqs_coeffs, eqs_rhs): eqdict = {sym2index[s]: elem_map[c] for s, c in eq.items()} if rhs: - eqdict[nsyms] = - elem_map[rhs] + eqdict[nsyms] = -elem_map[rhs] if eqdict: eqsdict.append(eqdict) - sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms+1), K) + sdm_aug = SDM(enumerate(eqsdict), (neqs, nsyms + 1), K) return sdm_aug -def _expand_eqs_deprecated(eqs): - """Use expand to cancel nonlinear terms. +def _linear_eq_to_dict(eqs, syms): + """Convert a system Expr/Eq equations into dict form, returning + the coefficient dictionaries and a list of syms-independent terms + from each expression in ``eqs```. - This approach matches previous behaviour of linsolve but should be - deprecated. + Examples + ======== + + >>> from sympy.polys.matrices.linsolve import _linear_eq_to_dict + >>> from sympy.abc import x + >>> _linear_eq_to_dict([2*x + 3], {x}) + ([{x: 2}], [3]) """ - def expand_eq(eq): - if eq.is_Equality: - eq = eq.lhs - eq.rhs - return eq.expand() + coeffs = [] + ind = [] + symset = set(syms) + for i, e in enumerate(eqs): + if e.is_Equality: + coeff, terms = _lin_eq2dict(e.lhs, symset) + cR, tR = _lin_eq2dict(e.rhs, symset) + # there were no nonlinear errors so now + # cancellation is allowed + coeff -= cR + for k, v in tR.items(): + if k in terms: + terms[k] -= v + else: + terms[k] = -v + # don't store coefficients of 0, however + terms = {k: v for k, v in terms.items() if v} + c, d = coeff, terms + else: + c, d = _lin_eq2dict(e, symset) + coeffs.append(d) + ind.append(c) + return coeffs, ind - return [expand_eq(eq) for eq in eqs] +def _lin_eq2dict(a, symset): + """return (c, d) where c is the sym-independent part of ``a`` and + ``d`` is an efficiently calculated dictionary mapping symbols to + their coefficients. A PolyNonlinearError is raised if non-linearity + is detected. -def _linear_eq_to_dict(eqs, syms): - """Convert a system Expr/Eq equations into dict form""" - try: - return _linear_eq_to_dict_inner(eqs, syms) - except PolyNonlinearError: - # XXX: This should be deprecated: - eqs = _expand_eqs_deprecated(eqs) - return _linear_eq_to_dict_inner(eqs, syms) - - -def _linear_eq_to_dict_inner(eqs, syms): - """Convert a system Expr/Eq equations into dict form""" - syms = set(syms) - eqsdict, eqs_rhs = [], [] - for eq in eqs: - rhs, eqdict = _lin_eq2dict(eq, syms) - eqsdict.append(eqdict) - eqs_rhs.append(rhs) - return eqsdict, eqs_rhs + The values in the dictionary will be non-zero. 
+ Examples + ======== -def _lin_eq2dict(a, symset): - """Efficiently convert a linear equation to a dict of coefficients""" + >>> from sympy.polys.matrices.linsolve import _lin_eq2dict + >>> from sympy.abc import x, y + >>> _lin_eq2dict(x + 2*y + 3, {x, y}) + (3, {x: 1, y: 2}) + """ if a in symset: return S.Zero, {a: S.One} elif a.is_Add: @@ -194,16 +214,17 @@ def _lin_eq2dict(a, symset): terms = ti terms_coeff = ci else: - raise PolyNonlinearError - coeff = Mul(*coeff_list) + # since ti is not null and we already have + # a term, this is a cross term + raise PolyNonlinearError(filldedent(''' + nonlinear cross-term: %s''' % a)) + coeff = Mul._from_args(coeff_list) if terms is None: return coeff, {} else: terms = {sym: coeff * c for sym, c in terms.items()} return coeff * terms_coeff, terms - elif a.is_Equality: - return _lin_eq2dict(a.lhs - a.rhs, symset) - elif not a.has_free(*symset): + elif not a.has_xfree(symset): return a, {} else: - raise PolyNonlinearError + raise PolyNonlinearError('nonlinear term: %s' % a) diff --git a/sympy/polys/matrices/tests/test_linsolve.py b/sympy/polys/matrices/tests/test_linsolve.py index e82f3f6af664..9d8cd7eb9feb 100644 --- a/sympy/polys/matrices/tests/test_linsolve.py +++ b/sympy/polys/matrices/tests/test_linsolve.py @@ -103,6 +103,9 @@ def all_close(sol1, sol2, eps=1e-15): def test__linsolve_deprecated(): - assert _linsolve([Eq(x**2, x**2+y)], [x, y]) == {x:x, y:S.Zero} - assert _linsolve([(x+y)**2-x**2], [x]) == {x:-y/2} - assert _linsolve([Eq((x+y)**2, x**2)], [x]) == {x:-y/2} + raises(PolyNonlinearError, lambda: + _linsolve([Eq(x**2, x**2 + y)], [x, y])) + raises(PolyNonlinearError, lambda: + _linsolve([(x + y)**2 - x**2], [x])) + raises(PolyNonlinearError, lambda: + _linsolve([Eq((x + y)**2, x**2)], [x])) diff --git a/sympy/solvers/ode/systems.py b/sympy/solvers/ode/systems.py index 82bdea435e8d..98263983e6f7 100644 --- a/sympy/solvers/ode/systems.py +++ b/sympy/solvers/ode/systems.py @@ -197,7 +197,6 @@ def simpcoeff(coeff, wrt2): rep = {} sol = [Eq(s.lhs, simprhs(s.rhs, rep, wrt1, wrt2)) for s in sol] - return sol @@ -468,13 +467,7 @@ def linear_ode_to_matrix(eqs, funcs, t, order): for o in range(order, -1, -1): # Work from the highest derivative down - funcs_deriv = [func.diff(t, o) for func in funcs] - - # linear_eq_to_matrix expects a proper symbol so substitute e.g. - # Derivative(x(t), t) for a Dummy. - rep = {func_deriv: Dummy() for func_deriv in funcs_deriv} - eqs = [eq.subs(rep) for eq in eqs] - syms = [rep[func_deriv] for func_deriv in funcs_deriv] + syms = [func.diff(t, o) for func in funcs] # Ai is the matrix for X(t).diff(t, o) # eqs is minus the remainder of the equations. 
@@ -947,7 +940,7 @@ def linodesolve(A, t, b=None, B=None, type="auto", doit=False, # constants = numbered_symbols(prefix='C', cls=Dummy, start=const_idx+1) Cvect = Matrix(list(Dummy() for _ in range(n))) - if any(type == typ for typ in ["type2", "type4", "type6"]) and b is None: + if b is None and any(type == typ for typ in ["type2", "type4", "type6"]): b = zeros(n, 1) is_transformed = tau is not None @@ -973,6 +966,7 @@ def linodesolve(A, t, b=None, B=None, type="auto", doit=False, A = system_info['A'] b = system_info['b'] + intx_wrtt = lambda x: Integral(x, t) if x else 0 if type in ("type1", "type2", "type5", "type6"): P, J = matrix_exp_jordan_form(A, t) P = simplify(P) @@ -981,8 +975,7 @@ def linodesolve(A, t, b=None, B=None, type="auto", doit=False, sol_vector = P * (J * Cvect) else: Jinv = J.subs(t, -t) - sol_vector = P * J * ((Jinv * P.inv() * b).applyfunc(lambda x: Integral(x, t)) + Cvect) - + sol_vector = P * J * ((Jinv * P.inv() * b).applyfunc(intx_wrtt) + Cvect) else: if B is None: B, _ = _is_commutative_anti_derivative(A, t) @@ -990,7 +983,7 @@ def linodesolve(A, t, b=None, B=None, type="auto", doit=False, if type == "type3": sol_vector = B.exp() * Cvect else: - sol_vector = B.exp() * (((-B).exp() * b).applyfunc(lambda x: Integral(x, t)) + Cvect) + sol_vector = B.exp() * (((-B).exp() * b).applyfunc(intx_wrtt) + Cvect) if is_transformed: sol_vector = sol_vector.subs(t, tau) diff --git a/sympy/solvers/ode/tests/test_systems.py b/sympy/solvers/ode/tests/test_systems.py index dd08d644f7a2..2a2b1e155c6d 100644 --- a/sympy/solvers/ode/tests/test_systems.py +++ b/sympy/solvers/ode/tests/test_systems.py @@ -1071,8 +1071,8 @@ def test_sysode_linear_neq_order1_type2(): eqs6 = [Eq(Derivative(f(x), x), -9*f(x) - 4*g(x)), Eq(Derivative(g(x), x), -4*g(x)), Eq(Derivative(h(x), x), h(x) + exp(x))] - sol6 = [Eq(f(x), C1*exp(-4*x)*Rational(-4, 5) + C2*exp(-9*x)), - Eq(g(x), C1*exp(-4*x)), + sol6 = [Eq(f(x), C2*exp(-4*x)*Rational(-4, 5) + C1*exp(-9*x)), + Eq(g(x), C2*exp(-4*x)), Eq(h(x), C3*exp(x) + x*exp(x))] assert dsolve(eqs6) == sol6 assert checksysodesol(eqs6, sol6) == (True, [0, 0, 0]) @@ -1647,11 +1647,11 @@ def test_higher_order_to_first_order_9(): eqs9 = [f(x) + g(x) - 2*exp(I*x) + 2*Derivative(f(x), x) + Derivative(f(x), (x, 2)), f(x) + g(x) - 2*exp(I*x) + 2*Derivative(g(x), x) + Derivative(g(x), (x, 2))] - sol9 = [Eq(f(x), -C1 + C2*exp(-2*x)/2 - (C3/2 - C4/2)*exp(-x)*cos(x) - + (C3/2 + C4/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5) + sol9 = [Eq(f(x), -C1 + C4*exp(-2*x)/2 - (C2/2 - C3/2)*exp(-x)*cos(x) + + (C2/2 + C3/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5) + 2*((1 - 2*I)*exp(I*x)*cos(x)**2/5)), - Eq(g(x), C1 - C2*exp(-2*x)/2 - (C3/2 - C4/2)*exp(-x)*cos(x) - + (C3/2 + C4/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5) + Eq(g(x), C1 - C4*exp(-2*x)/2 - (C2/2 - C3/2)*exp(-x)*cos(x) + + (C2/2 + C3/2)*exp(-x)*sin(x) + 2*((1 - 2*I)*exp(I*x)*sin(x)**2/5) + 2*((1 - 2*I)*exp(I*x)*cos(x)**2/5))] assert dsolve(eqs9) == sol9 assert checksysodesol(eqs9, sol9) == (True, [0, 0]) @@ -1977,11 +1977,8 @@ def test_linodesolve(): # non-homogeneous term assumed to be 0 sol1 = [-C1*exp(-t/2 + sqrt(5)*t/2)/2 + sqrt(5)*C1*exp(-t/2 + sqrt(5)*t/2)/2 - sqrt(5)*C2*exp(-sqrt(5)*t/2 - - t/2)/2 - C2*exp(-sqrt(5)*t/2 - t/2)/2 - exp(-t/2 + sqrt(5)*t/2)*Integral(0, t)/2 + - sqrt(5)*exp(-t/2 + sqrt(5)*t/2)*Integral(0, t)/2 - sqrt(5)*exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)/2 - - exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)/2, - C1*exp(-t/2 + sqrt(5)*t/2) + C2*exp(-sqrt(5)*t/2 - t/2) - + exp(-t/2 + 
sqrt(5)*t/2)*Integral(0, t) + exp(-sqrt(5)*t/2 - t/2)*Integral(0, t)] + - t/2)/2 - C2*exp(-sqrt(5)*t/2 - t/2)/2, + C1*exp(-t/2 + sqrt(5)*t/2) + C2*exp(-sqrt(5)*t/2 - t/2)] assert constant_renumber(linodesolve(A, t, type="type2"), variables=[t]) == sol1 # Testing the Errors diff --git a/sympy/solvers/solveset.py b/sympy/solvers/solveset.py index b0c49825a988..3f058d5ea824 100644 --- a/sympy/solvers/solveset.py +++ b/sympy/solvers/solveset.py @@ -12,7 +12,7 @@ """ from sympy.core.sympify import sympify from sympy.core import (S, Pow, Dummy, pi, Expr, Wild, Mul, Equality, - Add) + Add, Basic) from sympy.core.containers import Tuple from sympy.core.function import (Lambda, expand_complex, AppliedUndef, expand_log, _mexpand, expand_trig, nfloat) @@ -23,7 +23,7 @@ from sympy.core.sorting import default_sort_key, ordered from sympy.core.symbol import Symbol, _uniquely_named_symbol from sympy.core.sympify import _sympify -from sympy.core.traversal import iterfreeargs +from sympy.polys.matrices.linsolve import _linear_eq_to_dict from sympy.polys.polyroots import UnsolvableFactorError from sympy.simplify.simplify import simplify, fraction, trigsimp, nsimplify from sympy.simplify import powdenest, logcombine @@ -38,7 +38,7 @@ from sympy.sets import (FiniteSet, imageset, Interval, Intersection, Union, ConditionSet, ImageSet, Complement, Contains) from sympy.sets.sets import Set, ProductSet -from sympy.matrices import Matrix, MatrixBase +from sympy.matrices import zeros, Matrix, MatrixBase from sympy.ntheory import totient from sympy.ntheory.factor_ import divisors from sympy.ntheory.residue_ntheory import discrete_log, nthroot_mod @@ -54,11 +54,10 @@ from sympy.solvers.polysys import solve_poly_system from sympy.utilities import filldedent from sympy.utilities.iterables import (numbered_symbols, has_dups, - is_sequence) + is_sequence, iterable) from sympy.calculus.util import periodicity, continuous_domain, function_range from types import GeneratorType -from collections import defaultdict class NonlinearError(ValueError): @@ -2411,7 +2410,7 @@ def solvify(f, symbol, domain): ############################################################################### -def linear_coeffs(eq, *syms, **_kw): +def linear_coeffs(eq, *syms, dict=False): """Return a list whose elements are the coefficients of the corresponding symbols in the sum of terms in ``eq``. The additive constant is returned as the last element of the @@ -2422,69 +2421,84 @@ def linear_coeffs(eq, *syms, **_kw): NonlinearError The equation contains a nonlinear term + ValueError + duplicate or unordered symbols are passed + + Parameters + ========== + + dict - (default False) when True, return coefficients as a + dictionary with coefficients keyed to syms that were present; + key 1 gives the constant term Examples ======== >>> from sympy.solvers.solveset import linear_coeffs >>> from sympy.abc import x, y, z - >>> linear_coeffs(3*x + 2*y - 1, x, y) [3, 2, -1] It is not necessary to expand the expression: - >>> linear_coeffs(x + y*(z*(x*3 + 2) + 3), x) - [3*y*z + 1, y*(2*z + 3)] + >>> linear_coeffs(x + y*(z*(x*3 + 2) + 3), x) + [3*y*z + 1, y*(2*z + 3)] - But if there are nonlinear or cross terms -- even if they would - cancel after simplification -- an error is raised so the situation - does not pass silently past the caller's attention: + When nonlinear is detected, an error will be raised: - >>> eq = 1/x*(x - 1) + 1/x - >>> linear_coeffs(eq.expand(), x) - [0, 1] - >>> linear_coeffs(eq, x) - Traceback (most recent call last): - ... 
- NonlinearError: nonlinear term encountered: 1/x + * even if they would cancel after expansion (so the + situation does not pass silently past the caller's + attention) - >>> linear_coeffs(x*(y + 1) - x*y, x, y) - Traceback (most recent call last): - ... - NonlinearError: nonlinear term encountered: x*(y + 1) + >>> eq = 1/x*(x - 1) + 1/x + >>> linear_coeffs(eq.expand(), x) + [0, 1] + >>> linear_coeffs(eq, x) + Traceback (most recent call last): + ... + NonlinearError: + nonlinear in given generators + + * when there are cross terms + + >>> linear_coeffs(x*(y + 1), x, y) + Traceback (most recent call last): + ... + NonlinearError: + symbol-dependent cross-terms encountered + + * when there are terms that contain an expression + dependent on the symbols that is not linear + + >>> linear_coeffs(x**2, x) + Traceback (most recent call last): + ... + NonlinearError: + nonlinear in given generators """ - d = defaultdict(list) eq = _sympify(eq) + if len(syms) == 1 and iterable(syms[0]) and not isinstance(syms[0], Basic): + raise ValueError('expecting unpacked symbols, *syms') symset = set(syms) if len(symset) != len(syms): raise ValueError('duplicate symbols given') - has = set(iterfreeargs(eq)) & symset - if not has: - return [S.Zero]*len(syms) + [eq] - c, terms = eq.as_coeff_add(*has) - d[0].extend(Add.make_args(c)) - for t in terms: - m, f = t.as_coeff_mul(*has) - if len(f) != 1: - break - f = f[0] - if f in symset: - d[f].append(m) - elif f.is_Add: - d1 = linear_coeffs(f, *has, **{'dict': True}) - d[0].append(m*d1.pop(0)) - for xf, vf in d1.items(): - d[xf].append(m*vf) - else: - break - else: - for k, v in d.items(): - d[k] = Add(*v) - if not _kw: - return [d.get(s, S.Zero) for s in syms]+ [d[0]] - return d # default is still list but this won't matter - raise NonlinearError('nonlinear term encountered: %s' % t) + try: + d, c = _linear_eq_to_dict([eq], symset) + d = d[0] + c = c[0] + except PolyNonlinearError as err: + raise NonlinearError(str(err)) + if dict: + if c: + d[S.One] = c + return d + rv = [S.Zero]*(len(syms) + 1) + rv[-1] = c + for i, k in enumerate(syms): + if k not in d: + continue + rv[i] = d[k] + return rv def linear_eq_to_matrix(equations, *symbols): @@ -2531,39 +2545,39 @@ def linear_eq_to_matrix(equations, *symbols): The coefficients (numerical or symbolic) of the symbols will be returned as matrices: - >>> eqns = [c*x + z - 1 - c, y + z, x - y] - >>> A, b = linear_eq_to_matrix(eqns, [x, y, z]) - >>> A - Matrix([ - [c, 0, 1], - [0, 1, 1], - [1, -1, 0]]) - >>> b - Matrix([ - [c + 1], - [ 0], - [ 0]]) + >>> eqns = [c*x + z - 1 - c, y + z, x - y] + >>> A, b = linear_eq_to_matrix(eqns, [x, y, z]) + >>> A + Matrix([ + [c, 0, 1], + [0, 1, 1], + [1, -1, 0]]) + >>> b + Matrix([ + [c + 1], + [ 0], + [ 0]]) This routine does not simplify expressions and will raise an error if nonlinearity is encountered: - >>> eqns = [ - ... (x**2 - 3*x)/(x - 3) - 3, - ... y**2 - 3*y - y*(y - 4) + x - 4] - >>> linear_eq_to_matrix(eqns, [x, y]) - Traceback (most recent call last): - ... - NonlinearError: - The term (x**2 - 3*x)/(x - 3) is nonlinear in {x, y} + >>> eqns = [ + ... (x**2 - 3*x)/(x - 3) - 3, + ... y**2 - 3*y - y*(y - 4) + x - 4] + >>> linear_eq_to_matrix(eqns, [x, y]) + Traceback (most recent call last): + ... 
+ NonlinearError: + symbol-dependent term can be ignored using `strict=False` - Simplifying these equations will discard the removable singularity - in the first, reveal the linear structure of the second: + Simplifying these equations will discard the removable singularity + in the first and reveal the linear structure of the second: - >>> [e.simplify() for e in eqns] - [x - 3, x + y - 4] + >>> [e.simplify() for e in eqns] + [x - 3, x + y - 4] - Any such simplification needed to eliminate nonlinear terms must - be done before calling this routine. + Any such simplification needed to eliminate nonlinear terms must + be done *before* calling this routine. """ if not symbols: raise ValueError(filldedent(''' @@ -2574,12 +2588,6 @@ def linear_eq_to_matrix(equations, *symbols): if hasattr(symbols[0], '__iter__'): symbols = symbols[0] - for i in symbols: - if not isinstance(i, Symbol): - raise ValueError(filldedent(''' - Expecting a Symbol but got %s - ''' % i)) - if has_dups(symbols): raise ValueError('Symbols must be unique') @@ -2594,14 +2602,20 @@ def linear_eq_to_matrix(equations, *symbols): Eq or Matrix. ''')) - A, b = [], [] - for i, f in enumerate(equations): - if isinstance(f, Equality): - f = f.rewrite(Add, evaluate=False) - coeff_list = linear_coeffs(f, *symbols) - b.append(-coeff_list.pop()) - A.append(coeff_list) - A, b = map(Matrix, (A, b)) + # construct the dictionaries + try: + eq, c = _linear_eq_to_dict(equations, symbols) + except PolyNonlinearError as err: + raise NonlinearError(str(err)) + # prepare output matrices + n, m = shape = len(eq), len(symbols) + ix = dict(zip(symbols, range(m))) + A = zeros(*shape) + for row, d in enumerate(eq): + for k in d: + col = ix[k] + A[row, col] = d[k] + b = Matrix(n, 1, [-i for i in c]) return A, b @@ -2760,16 +2774,23 @@ def linsolve(system, *symbols): >>> linsolve([], x) EmptySet - * An error is raised if, after expansion, any nonlinearity - is detected: + * An error is raised if any nonlinearity is detected, even + if it could be removed with expansion + + >>> linsolve([x*(1/x - 1)], x) + Traceback (most recent call last): + ... + NonlinearError: nonlinear term: 1/x + + >>> linsolve([x*(y + 1)], x, y) + Traceback (most recent call last): + ... + NonlinearError: nonlinear cross-term: x*(y + 1) - >>> linsolve([x*(1/x - 1), (y - 1)**2 - y**2 + 1], x, y) - {(1, 1)} >>> linsolve([x**2 - 1], x) Traceback (most recent call last): ... 
- NonlinearError: - nonlinear term encountered: x**2 + NonlinearError: nonlinear term: x**2 """ if not system: return S.EmptySet diff --git a/sympy/solvers/tests/test_solveset.py b/sympy/solvers/tests/test_solveset.py index 87def61c7632..a6805ea2fe8f 100644 --- a/sympy/solvers/tests/test_solveset.py +++ b/sympy/solvers/tests/test_solveset.py @@ -1362,6 +1362,10 @@ def test_abs_invert_solvify(): def test_linear_eq_to_matrix(): + assert linear_eq_to_matrix(0, x) == (Matrix([[0]]), Matrix([[0]])) + assert linear_eq_to_matrix(1, x) == (Matrix([[0]]), Matrix([[-1]])) + + # integer coefficients eqns1 = [2*x + y - 2*z - 3, x - y - z, x + y + 3*z - 12] eqns2 = [Eq(3*x + 2*y - z, 1), Eq(2*x - 2*y + 4*z, -2), -2*x + y - 2*z] @@ -1379,17 +1383,24 @@ def test_linear_eq_to_matrix(): assert A == Matrix([[a*b, b, c], [d + e, f, g], [i, j, k]]) assert B == Matrix([[d], [h], [l]]) - # raise ValueError if + # raise Errors if # 1) no symbols are given raises(ValueError, lambda: linear_eq_to_matrix(eqns3)) # 2) there are duplicates raises(ValueError, lambda: linear_eq_to_matrix(eqns3, [x, x, y])) - # 3) there are non-symbols - raises(ValueError, lambda: linear_eq_to_matrix(eqns3, [x, 1/a, y])) - # 4) a nonlinear term is detected in the original expression + # 3) a nonlinear term is detected in the original expression raises(NonlinearError, lambda: linear_eq_to_matrix(Eq(1/x + x, 1/x), [x])) + raises(NonlinearError, lambda: linear_eq_to_matrix([x**2], [x])) + raises(NonlinearError, lambda: linear_eq_to_matrix([x*y], [x, y])) + # 4) Eq being used to represent equations autoevaluates + # (use unevaluated Eq instead) + raises(ValueError, lambda: linear_eq_to_matrix(Eq(x, x), x)) + raises(ValueError, lambda: linear_eq_to_matrix(Eq(x, x + 1), x)) + + + # if non-symbols are passed, the user is responsible for interpreting + assert linear_eq_to_matrix([x], [1/x]) == (Matrix([[0]]), Matrix([[-x]])) - assert linear_eq_to_matrix(1, x) == (Matrix([[0]]), Matrix([[-1]])) # issue 15195 assert linear_eq_to_matrix(x + y*(z*(3*x + 2) + 3), x) == ( Matrix([[3*y*z + 1]]), Matrix([[-y*(2*z + 3)]])) @@ -1502,12 +1513,13 @@ def test_linsolve(): assert linsolve(Eqns, x, y) == { (kilo*newton*Rational(-28, 3), kN*Rational(4, 3))} - # linsolve fully expands expressions, so removable singularities - # and other nonlinearity does not raise an error + # linsolve does not allow expansion (real or implemented) + # to remove singularities, but it will cancel linear terms assert linsolve([Eq(x, x + y)], [x, y]) == {(x, 0)} - assert linsolve([Eq(1/x, 1/x + y)], [x, y]) == {(x, 0)} - assert linsolve([Eq(y/x, y/x + y)], [x, y]) == {(x, 0)} - assert linsolve([Eq(x*(x + 1), x**2 + y)], [x, y]) == {(y, y)} + assert linsolve([Eq(x + x*y, 1 + y)], [x]) == {(1,)} + assert linsolve([Eq(1 + y, x + x*y)], [x]) == {(1,)} + raises(NonlinearError, lambda: + linsolve([Eq(x**2, x**2 + y)], [x, y])) # corner cases # @@ -2785,8 +2797,11 @@ def test_linear_coeffs(): linear_coeffs(x, x, x)) assert linear_coeffs(a*(x + y), x, y) == [a, a, 0] assert linear_coeffs(1.0, x, y) == [0, 0, 1.0] + # don't include coefficients of 0 + assert linear_coeffs(Eq(x, x + y), x, y, dict=True) == {y: -1} + assert linear_coeffs(0, x, y, dict=True) == {} + -# modular tests def test_is_modular(): assert _is_modular(y, x) is False assert _is_modular(Mod(x, 3) - 1, x) is True
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-24488@3d0b53d
sympy/sympy
Python
24,488
Improves the docstring of rigid body
#### References to other Issues or PRs
Fixes #24451
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
NO ENTRY
2023-01-10T10:07:24Z
sympy.physics.mechanics // RigidBody I looked at the 'explanations' of RigidBody, using _help(me.RigidBody)._ For the kinetic energy I saw this: _'T = 1/2 (I omega^2 + m v^2)'_ Should it not be: T = 1/2 ( **(** I omega **)**^2 + m v^2 ) A **very minor** point, but since I saw it, I wanted to point it out.
`T = rotational_KE + translational_KE` where `rotational_KE = 1/2(I ω^2)` and `translational_KE = 1/2(m v^2)`. Hence `T = 1/2(I ω^2 + m v^2)`. So it was mentioned [right](https://github.com/sympy/sympy/blob/master/sympy/physics/mechanics/rigidbody.py#L235).

Are you sure? I is a 3 x 3 matrix and omega is a vector, hence omega^2 = dot(omega, omega) is a scalar. Hence, I omega^2 is a 3 x 3 matrix. So, are you not adding a 3 x 3 matrix and the scalar m v^2? Like I wrote, it is a small issue, only two parentheses are missing.

The moment of inertia is a tensor because it behaves both as a scalar and a vector; `rotational_KE = 1/2(I ω^2)` is also a scalar quantity because it does not have a direction associated with it. It is simply a measure of the energy that an object possesses due to its rotational motion. We can check how `rotational_KE = 1/2(I ω^2)` comes about [here](https://farside.ph.utexas.edu/teaching/336k/Newton/node65.html). We can also check through dimensional analysis that `1/2(I ω^2)` and `1/2(m v^2)` are both [ML^2T^(-2)].

Naturally I is a tensor, but does this mean it behaves like a vector *and* a scalar - whatever this is supposed to mean? In my opinion, I * omega^2 is not a scalar; (I * omega)^2 is a scalar - both times interpreting vector^2 as the inner product of the vector with itself. The dimensional analysis is only a necessary condition for a formula to be correct, certainly not a sufficient one. I think my *very* small point is not worth further discussion.

In a vector/matrix notation, it should be `omega.I.omega`, just like for the translational kinetic energy `v.m.v`. Then when the matrix `I` is just the identity matrix times a scalar, the equation simplifies to the one given. I agree that this is somewhat of a sloppy notation, since this condition is certainly not always true, so your concern is correct; however, the suggested correction is incorrect. Personally I prefer a tensor notation with Einstein summation convention: `T = 1/2 (I_ij.omega^i.omega^j + m_kl.v^k.v^l)`, but this is often not used in textbook equations (because it requires somewhat of a deeper understanding). It does mean that we are not reliant on the order of the symbols that are multiplied, and it immediately tells us that I and m transform in the opposite way w.r.t. omega and v respectively under a coordinate transformation.

I think my suggested correction is correct, assuming vector^2 is considered the inner product of the vector with itself.

The equation for translational velocity is exactly the same, so then according to your argument the translational energy should be `(m.v)^2`.

No, it is correct as given.

I was wrong, the rotational energy is: `0.5 * inner_product(I*omega, omega) = 0.5 * dot(I*omega, omega)`. If this is the same as I * omega^2, then all is fine.

Yes, the same is meant. Using squares is mathematically not the nicest way to write it, as $\mathbf{v}$ and $\mathbf{\omega}$ are in fact vectors. However as you've pointed out, this is probably not worth further discussion, so I'll close this issue.

P.S. Planning to work on rewriting some of the mechanics, so maybe I'll change it to $\mathbf{v}^T M \mathbf{v} + \mathbf{\omega}^T I \mathbf{\omega}$ or $M \mathbf{v} \cdot \mathbf{v} + I \mathbf{\omega} \cdot \mathbf{\omega}$.
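To make the point above concrete, here is a small, hedged check (closely following the `kinetic_energy` doctest in `rigidbody.py`) showing that SymPy computes the scalar dot-product form ω·(I·ω)/2 + m v·v/2, not a matrix-valued quantity:

```python
from sympy import symbols
from sympy.physics.mechanics import Point, ReferenceFrame, RigidBody, outer

M, v, omega = symbols('M v omega')
N = ReferenceFrame('N')            # inertial frame
b = ReferenceFrame('b')            # body-fixed frame
b.set_ang_vel(N, omega * b.x)      # angular velocity of the body in N
P = Point('P')
P.set_vel(N, v * N.x)              # velocity of the mass center in N
I = outer(b.x, b.x)                # unit central inertia dyadic about b.x
B = RigidBody('B', P, b, M, (I, P))
print(B.kinetic_energy(N))         # M*v**2/2 + omega**2/2 -- a scalar
```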
[ { "body": "I looked at the 'explanations' of RigidBody, using _help(me.RigidBody)._\n\nFor the kinetic energy I saw this:\n\n_'T = 1/2 (I omega^2 + m v^2)'_\n\nShould it not be:\n\nT = 1/2 ( **(** I omega **)**^2 + m v^2 )\n\nA **very minor** point, but since I saw it, I wanted to point it out.", "number": 24451, "title": "sympy.physics.mechanics // RigidBody" } ]
f47b9f2c39d39a06a993e364edfbf3836bb6140d
{ "head_commit": "3d0b53d5d23e6091ea4ddff7af9defd66ad98a92", "head_commit_message": "fixes # 24451", "patch_to_review": "diff --git a/sympy/physics/mechanics/particle.py b/sympy/physics/mechanics/particle.py\nindex d8db1942c40b..6474ac7aad32 100644\n--- a/sympy/physics/mechanics/particle.py\n+++ b/sympy/physics/mechanics/particle.py\n@@ -124,7 +124,7 @@ def angular_momentum(self, point, frame):\n The angular momentum H, about some point O of a particle, P, is given\n by:\n \n- H = r x m * v\n+ `H = cross(r, m) * v`\n \n where r is the position vector from point O to the particle P, m is\n the mass of the particle, and v is the velocity of the particle in\n@@ -167,7 +167,7 @@ def kinetic_energy(self, frame):\n \n The kinetic energy, T, of a particle, P, is given by\n \n- 'T = 1/2 m v^2'\n+ `T = 1/2 (dot(m * v, v))`\n \n where m is the mass of particle P, and v is the velocity of the\n particle in the supplied ReferenceFrame.\ndiff --git a/sympy/physics/mechanics/rigidbody.py b/sympy/physics/mechanics/rigidbody.py\nindex 59ef489e7dc6..496d182fa344 100644\n--- a/sympy/physics/mechanics/rigidbody.py\n+++ b/sympy/physics/mechanics/rigidbody.py\n@@ -182,7 +182,7 @@ def angular_momentum(self, point, frame):\n The angular momentum H of a rigid body B about some point O in a frame\n N is given by:\n \n- H = I . w + r x Mv\n+ `H = dot(I, w) + cross(r, M) * v`\n \n where I is the central inertia dyadic of B, w is the angular velocity\n of body B in the frame, N, r is the position vector from point O to the\n@@ -232,7 +232,7 @@ def kinetic_energy(self, frame):\n \n The kinetic energy, T, of a rigid body, B, is given by\n \n- 'T = 1/2 (I omega^2 + m v^2)'\n+ `T = 1/2 * (dot(dot(I, w), w) + dot(m * v, v))`\n \n where I and m are the central inertia dyadic and mass of rigid body B,\n respectively, omega is the body's angular velocity and v is the\n" }
[ { "diff_hunk": "@@ -182,7 +182,7 @@ def angular_momentum(self, point, frame):\n The angular momentum H of a rigid body B about some point O in a frame\n N is given by:\n \n- H = I . w + r x Mv\n+ `H = dot(I, w) + cross(r, M) * v`", "line": null, "original_line": 185, "original_start_line": null, "path": "sympy/physics/mechanics/rigidbody.py", "start_line": null, "text": "@user1:\nShould it not be `cross(r, Mv)`? I think cross products are only defined on vectors.\n\n@author:\nSorry by mistake, I corrected it" } ]
2408e0b5a1f767abefaf136385b5d244bff1fea2
diff --git a/.mailmap b/.mailmap index 4b106d6f6eb4..0aa5ef9e9165 100644 --- a/.mailmap +++ b/.mailmap @@ -209,6 +209,7 @@ Abhinav Anand <[email protected]> Abhinav Chanda <[email protected]> Abhishek <[email protected]> Abhishek Garg <[email protected]> +Abhishek Patidar <[email protected]> ABHISHEK PATIDAR <[email protected]> Abhishek Patidar <[email protected]> Abhishek Patidar <[email protected]> Abhishek Verma <[email protected]> Achal Jain <[email protected]> diff --git a/sympy/physics/mechanics/particle.py b/sympy/physics/mechanics/particle.py index d8db1942c40b..2abc5235566e 100644 --- a/sympy/physics/mechanics/particle.py +++ b/sympy/physics/mechanics/particle.py @@ -83,7 +83,7 @@ def linear_momentum(self, frame): =========== The linear momentum L, of a particle P, with respect to frame N is - given by + given by: L = m * v @@ -124,7 +124,7 @@ def angular_momentum(self, point, frame): The angular momentum H, about some point O of a particle, P, is given by: - H = r x m * v + ``H = cross(r, m * v)`` where r is the position vector from point O to the particle P, m is the mass of the particle, and v is the velocity of the particle in @@ -165,9 +165,9 @@ def kinetic_energy(self, frame): Explanation =========== - The kinetic energy, T, of a particle, P, is given by + The kinetic energy, T, of a particle, P, is given by: - 'T = 1/2 m v^2' + ``T = 1/2 (dot(m * v, v))`` where m is the mass of particle P, and v is the velocity of the particle in the supplied ReferenceFrame. diff --git a/sympy/physics/mechanics/rigidbody.py b/sympy/physics/mechanics/rigidbody.py index 59ef489e7dc6..16f2c2db58d4 100644 --- a/sympy/physics/mechanics/rigidbody.py +++ b/sympy/physics/mechanics/rigidbody.py @@ -138,7 +138,7 @@ def linear_momentum(self, frame): =========== The linear momentum L, of a rigid body B, with respect to frame N is - given by + given by: L = M * v* @@ -182,7 +182,7 @@ def angular_momentum(self, point, frame): The angular momentum H of a rigid body B about some point O in a frame N is given by: - H = I . w + r x Mv + ``H = dot(I, w) + cross(r, M * v)`` where I is the central inertia dyadic of B, w is the angular velocity of body B in the frame, N, r is the position vector from point O to the @@ -230,9 +230,9 @@ def kinetic_energy(self, frame): Explanation =========== - The kinetic energy, T, of a rigid body, B, is given by + The kinetic energy, T, of a rigid body, B, is given by: - 'T = 1/2 (I omega^2 + m v^2)' + ``T = 1/2 * (dot(dot(I, w), w) + dot(m * v, v))`` where I and m are the central inertia dyadic and mass of rigid body B, respectively, omega is the body's angular velocity and v is the
{ "difficulty": "low", "estimated_review_effort": 1, "problem_domain": "Documentation Updates" }
sympy__sympy-23404@d37a3c0
sympy/sympy
Python
23,404
Improve printing for `PrimeIdeal`
#### References to other Issues or PRs
Fixes #22974
#### Brief description of what is fixed or changed
* It is now possible for a `Submodule` representing the maximal ideal of an `AlgebraicField` to have a reference back to that field.
* When the maximal ideal is constructed via `AlgebraicField.maximal_order()` or `AlgebraicField.integral_basis()` it _does_ have such a reference.
* As a consequence, we're able to improve the printing methods for `PrimeIdeal`, so that they use the alias of the associated field's primitive element, if any. In particular, we:
  - Support latex printing
  - Rename `_pretty()` --> `repr()` since this is not 2D printing.
  - Provide a `__str__()` method, which prints less info than the `__repr__()` method.
#### Other comments
#### Release Notes
* polys
  * Improve printing for `PrimeIdeal`
2022-04-21T00:02:16Z
polys/numberfields: `PrimeIdeal` printing methods need work. Two methods in the `PrimeIdeal` class, `pretty()` (for printing) and `reduce_poly()`, involve generating polynomial expressions. Ideally, the generator symbol in these expressions should be able to be automatically the same as the `alias` symbol for the generator of the surrounding `AlgebraicField`. To do this, instances of `PrimeIdeal` will need a reference to that field, which they currently lack.
So, in #22977 these two methods were made private, so that they could still be changed after v1.10. Here are some thoughts on improving them for v1.11:

## `PrimeIdeal.pretty`

First, this is misnamed, because it's not "pretty printing" in the sense of 2D displays. It's really just a `repr`-type string.

Apart from this, we should have better `__str__` and LaTeX printing for the `PrimeIdeal` class, along the lines discussed in the OP (i.e. where the alias of the primitive element of the associated `AlgebraicField` (if any) can be automatically used). Before that can happen though, it would be best if #23338 could be resolved.

## `PrimeIdeal.reduce_poly`

I'd like to improve this, and keep it around because it is useful, but I'd like to keep it private (leading underscore) in case we want to change the argument or return types as the need becomes more clear.

The basic operation is an important one: we're mapping elements of a ring of integers `R` into the finite field `R/P`, where `P` is a prime ideal. So it's like the [`nfmodpr`](http://pari.math.u-bordeaux.fr/dochtml/html-stable/General_number_fields.html#nfmodpr) function of PARI.

The existing implementation is funny though, in that it accepts a `Poly`. It makes more sense to accept a class like `ANP` or `AlgebraicNumber` (or both), which represents an algebraic number more directly. (And the method should be called something simpler like `_reduce()`).

As for the return value, we're using a `Poly` mod `p`, which does make sense as a representation of an element of a finite field of order `p^n`, but again it's not clear if this should change in the future. We might just want a `dup` instead of a `Poly`, or at some point SymPy might provide `GF(p^n)` as a domain type (for `n > 1`), and then we might want it to be one of those (in particular if `GF(p^n)` lets you choose the irreducible poly by which it's defined).
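As a rough illustration of the objects whose printing is being reworked here (the output format depends on the SymPy version, which is exactly what this PR changes):

```python
from sympy import QQ, sqrt

K = QQ.algebraic_field(sqrt(-7))
for P in K.primes_above(2):
    # the repr/str split discussed above; the exact strings vary by version
    print(repr(P), '|', str(P))
```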
[ { "body": "Two methods in the `PrimeIdeal` class, `pretty()` (for printing) and `reduce_poly()`, involve generating polynomial expressions. Ideally, the generator symbol in these expressions should be able to be automatically the same as the `alias` symbol for the generator of the surrounding `AlgebraicField`.\r\n\r\nTo do this, instances of `PrimeIdeal` will need a reference to that field, which they currently lack.", "number": 22974, "title": "polys/numberfields: `PrimeIdeal` printing methods need work." } ]
29f265496ed010b427b95435d18b942ed2c5cb0a
{ "head_commit": "d37a3c05b98c8144d401fa264af687a525b5e39c", "head_commit_message": "Improve printing for `PrimeIdeal`\n\n* Support latex printing\n* Rename `_pretty()` --> `repr()` since this is not 2D printing.\n* Provide a `__str__()` method, which prints less info than the `__repr__()` method.", "patch_to_review": "diff --git a/sympy/polys/domains/algebraicfield.py b/sympy/polys/domains/algebraicfield.py\nindex 830d820ba24e..241ce819a2a8 100644\n--- a/sympy/polys/domains/algebraicfield.py\n+++ b/sympy/polys/domains/algebraicfield.py\n@@ -213,8 +213,7 @@ class AlgebraicField(Field, CharacteristicZero, SimpleDomain):\n >>> K\n QQ<exp(2*I*pi/7)>\n >>> K.primes_above(11)\n- [[ (11, _x**3 + 5*_x**2 + 4*_x - 1) e=1, f=3 ],\n- [ (11, _x**3 - 4*_x**2 - 5*_x - 1) e=1, f=3 ]]\n+ [(11, _x**3 + 5*_x**2 + 4*_x - 1), (11, _x**3 - 4*_x**2 - 5*_x - 1)]\n \n Notes\n =====\n@@ -433,7 +432,7 @@ def from_GaussianRationalField(K1, a, K0):\n \n def _do_round_two(self):\n from sympy.polys.numberfields.basis import round_two\n- ZK, dK = round_two(self.ext.minpoly, radicals=self._nilradicals_mod_p)\n+ ZK, dK = round_two(self, radicals=self._nilradicals_mod_p)\n self._maximal_order = ZK\n self._discriminant = dK\n \n@@ -512,7 +511,6 @@ def integral_basis(self, fmt=None):\n return [self.to_alg_num(b) for b in B]\n return B\n \n-\n def discriminant(self):\n \"\"\"Get the discriminant of the field.\"\"\"\n if self._discriminant is None:\ndiff --git a/sympy/polys/numberfields/basis.py b/sympy/polys/numberfields/basis.py\nindex 9cb4b0e0aab4..a4ff2df0b286 100644\n--- a/sympy/polys/numberfields/basis.py\n+++ b/sympy/polys/numberfields/basis.py\n@@ -1,6 +1,7 @@\n \"\"\"Computing integral bases for number fields. \"\"\"\n \n from sympy.polys.polytools import Poly\n+from sympy.polys.domains.algebraicfield import AlgebraicField\n from sympy.polys.domains.integerring import ZZ\n from sympy.polys.domains.rationalfield import QQ\n from sympy.polys.polyerrors import CoercionFailed\n@@ -103,6 +104,10 @@ def round_two(T, radicals=None):\n polynomial *T* over :ref:`ZZ`. This computes an integral basis and the\n discriminant for the field $K = \\mathbb{Q}[x]/(T(x))$.\n \n+ Alternatively, you may pass an :py:class:`~.AlgebraicField` instance, in\n+ place of the polynomial *T*, in which case the algorithm is applied to the\n+ minimal polynomial for the field's primitive element.\n+\n Ordinarily this function need not be called directly, as one can instead\n access the :py:meth:`~.AlgebraicField.maximal_order`,\n :py:meth:`~.AlgebraicField.integral_basis`, and\n@@ -147,9 +152,10 @@ def round_two(T, radicals=None):\n Parameters\n ==========\n \n- T : :py:class:`~.Poly`\n- The irreducible monic polynomial over :ref:`ZZ` defining the number\n- field.\n+ T : :py:class:`~.Poly`, :py:class:`~.AlgebraicField`\n+ Either (1) the irreducible monic polynomial over :ref:`ZZ` defining the\n+ number field, or (2) an :py:class:`~.AlgebraicField` representing the\n+ number field itself.\n \n radicals : dict, optional\n This is a way for any $p$-radicals (if computed) to be returned by\n@@ -182,6 +188,9 @@ def round_two(T, radicals=None):\n .. [1] Cohen, H. 
*A Course in Computational Algebraic Number Theory.*\n \n \"\"\"\n+ K = None\n+ if isinstance(T, AlgebraicField):\n+ K, T = T, T.ext.minpoly_of_element()\n if T.domain == QQ:\n try:\n T = Poly(T, domain=ZZ)\n@@ -198,7 +207,7 @@ def round_two(T, radicals=None):\n # D must be 0 or 1 mod 4 (see Cohen Sec 4.4), which ensures we can write\n # it in the form D = D_0 * F**2, where D_0 is 1 or a fundamental discriminant.\n _, F = extract_fundamental_discriminant(D)\n- Ztheta = PowerBasis(T)\n+ Ztheta = PowerBasis(K or T)\n H = Ztheta.whole_submodule()\n nilrad = None\n while F:\ndiff --git a/sympy/polys/numberfields/modules.py b/sympy/polys/numberfields/modules.py\nindex 22813c8065f3..bd852553d56c 100644\n--- a/sympy/polys/numberfields/modules.py\n+++ b/sympy/polys/numberfields/modules.py\n@@ -10,8 +10,9 @@\n \n * For a :py:class:`~.PowerBasis`, the generators are the first $n$ powers\n (starting with the zeroth) of an algebraic integer $\\theta$ of degree $n$.\n- The :py:class:`~.PowerBasis` is constructed by passing the minimal\n- polynomial of $\\theta$.\n+ The :py:class:`~.PowerBasis` is constructed by passing either the minimal\n+ polynomial of $\\theta$, or an :py:class:`~.AlgebraicField` having $\\theta$\n+ as its primitive element.\n \n * For a :py:class:`~.Submodule`, the generators are a set of\n $\\mathbb{Q}$-linear combinations of the generators of another module. That\n@@ -181,6 +182,7 @@\n from sympy.core.symbol import Dummy\n from sympy.polys.polytools import Poly\n from sympy.polys.densetools import dup_clear_denoms\n+from sympy.polys.domains.algebraicfield import AlgebraicField\n from sympy.polys.domains.finitefield import FF\n from sympy.polys.domains.rationalfield import QQ\n from sympy.polys.domains.integerring import ZZ\n@@ -436,6 +438,28 @@ def nearest_common_ancestor(self, other):\n break\n return nca\n \n+ @property\n+ def number_field(self):\n+ r\"\"\"\n+ Return the associated :py:class:`~.AlgebraicField`, if any.\n+\n+ Explanation\n+ ===========\n+\n+ A :py:class:`~.PowerBasis` can be constructed on a :py:class:`~.Poly`\n+ $f$ or on an :py:class:`~.AlgebraicField` $K$. 
In the latter case, the\n+ :py:class:`~.PowerBasis` and all its descendant modules will return $K$\n+ as their ``.number_field`` property, while in the former case they will\n+ all return ``None``.\n+\n+ Returns\n+ =======\n+\n+ :py:class:`~.AlgebraicField`, ``None``\n+\n+ \"\"\"\n+ return self.power_basis_ancestor().number_field\n+\n def is_compat_col(self, col):\n \"\"\"Say whether *col* is a suitable column vector for this module.\"\"\"\n return isinstance(col, DomainMatrix) and col.shape == (self.n, 1) and col.domain.is_ZZ\n@@ -678,15 +702,30 @@ def __init__(self, T):\n Parameters\n ==========\n \n- T : :py:class:`~.Poly`\n- The monic, irreducible, univariate polynomial over :ref:`ZZ`, a\n- root of which is the generator of the power basis.\n+ T : :py:class:`~.Poly`, :py:class:`~.AlgebraicField`\n+ Either (1) the monic, irreducible, univariate polynomial over\n+ :ref:`ZZ`, a root of which is the generator of the power basis,\n+ or (2) an :py:class:`~.AlgebraicField` whose primitive element\n+ is the generator of the power basis.\n \n \"\"\"\n+ K = None\n+ if isinstance(T, AlgebraicField):\n+ K, T = T, T.ext.minpoly_of_element()\n+ if T.domain == QQ:\n+ try:\n+ T = Poly(T, domain=ZZ)\n+ except CoercionFailed:\n+ raise ValueError('A polynomial over ZZ is required')\n+ self.K = K\n self.T = T\n self._n = T.degree()\n self._mult_tab = None\n \n+ @property\n+ def number_field(self):\n+ return self.K\n+\n def __repr__(self):\n return f'PowerBasis({self.T.as_expr()})'\n \n@@ -1542,6 +1581,28 @@ def poly(self, x=None):\n \"\"\"Obtain the number as a polynomial over :ref:`QQ`.\"\"\"\n return self.numerator(x=x) // self.denom\n \n+ @property\n+ def is_rational(self):\n+ \"\"\"Say whether this element represents a rational number.\"\"\"\n+ return self.col[1:, :].is_zero_matrix\n+\n+ @property\n+ def generator(self):\n+ \"\"\"\n+ Return a :py:class:`~.Symbol` to be used when expressing this element\n+ as a polynomial.\n+\n+ If we have an associated :py:class:`~.AlgebraicField` whose primitive\n+ element has an alias symbol, we use that. Otherwise we use the variable\n+ of the minimal polynomial defining the power basis to which we belong.\n+ \"\"\"\n+ K = self.module.number_field\n+ return K.ext.alias if K and K.ext.is_aliased else self.T.gen\n+\n+ def as_expr(self, x=None):\n+ \"\"\"Create a Basic expression from ``self``. 
\"\"\"\n+ return self.poly(x or self.generator).as_expr()\n+\n def norm(self, T=None):\n \"\"\"Compute the norm of this number.\"\"\"\n T = T or self.T\ndiff --git a/sympy/polys/numberfields/primes.py b/sympy/polys/numberfields/primes.py\nindex dd3f6145cbfc..7fed6672cb3e 100644\n--- a/sympy/polys/numberfields/primes.py\n+++ b/sympy/polys/numberfields/primes.py\n@@ -70,7 +70,12 @@ def __init__(self, ZK, p, alpha, f, e=None):\n self._test_factor = None\n self.e = e if e is not None else self.valuation(p * ZK)\n \n- def pretty(self, field_gen=None, just_gens=False):\n+ def __str__(self):\n+ if self.alpha.is_rational:\n+ return f'({self.p})'\n+ return f'({self.p}, {self.alpha.as_expr()})'\n+\n+ def repr(self, field_gen=None, just_gens=False):\n \"\"\"\n Print a representation of this prime ideal.\n \n@@ -82,11 +87,11 @@ def pretty(self, field_gen=None, just_gens=False):\n >>> T = cyclotomic_poly(7, x)\n >>> K = QQ.algebraic_field((T, zeta))\n >>> P = K.primes_above(11)\n- >>> print(P[0].pretty())\n+ >>> print(P[0].repr())\n [ (11, x**3 + 5*x**2 + 4*x - 1) e=1, f=3 ]\n- >>> print(P[0].pretty(field_gen=zeta))\n+ >>> print(P[0].repr(field_gen=zeta))\n [ (11, zeta**3 + 5*zeta**2 + 4*zeta - 1) e=1, f=3 ]\n- >>> print(P[0].pretty(field_gen=zeta, just_gens=True))\n+ >>> print(P[0].repr(field_gen=zeta, just_gens=True))\n (11, zeta**3 + 5*zeta**2 + 4*zeta - 1)\n \n Parameters\n@@ -114,7 +119,7 @@ def pretty(self, field_gen=None, just_gens=False):\n return f'[ {gens} e={e}, f={f} ]'\n \n def __repr__(self):\n- return self.pretty()\n+ return self.repr()\n \n def as_submodule(self):\n r\"\"\"\ndiff --git a/sympy/polys/numberfields/tests/test_primes.py b/sympy/polys/numberfields/tests/test_primes.py\nindex 09e31e012918..93ec01375564 100644\n--- a/sympy/polys/numberfields/tests/test_primes.py\n+++ b/sympy/polys/numberfields/tests/test_primes.py\n@@ -193,17 +193,16 @@ def test_decomp_8():\n x ** 3 + 9 * x ** 2 + 6 * x - 8,\n x ** 3 + 15 * x ** 2 - 9 * x + 13,\n )\n- '''\n def display(T, p, radical, P, I, J):\n \"\"\"Useful for inspection, when running test manually.\"\"\"\n print('=' * 20)\n print(T, p, radical)\n for Pi in P:\n- print(f' ({Pi.pretty()})')\n+ print(f' ({Pi!r})')\n print(\"I: \", I)\n print(\"J: \", J)\n print(f'Equal: {I == J}')\n- '''\n+ inspect = False\n for g in cases:\n T = Poly(g)\n rad = {}\n@@ -216,7 +215,8 @@ def display(T, p, radical, P, I, J):\n P = prime_decomp(p, T, dK=dK, ZK=ZK, radical=radical)\n I = prod(Pi**Pi.e for Pi in P)\n J = p * ZK\n- #display(T, p, radical, P, I, J)\n+ if inspect:\n+ display(T, p, radical, P, I, J)\n assert I == J\n \n \n@@ -238,16 +238,31 @@ def test_PrimeIdeal_add():\n assert P0 + 7 * P0.ZK == P0.as_submodule()\n \n \n-def test_pretty_printing():\n- d = -7\n- T = Poly(x ** 2 - d)\n- rad = {}\n- ZK, dK = round_two(T, radicals=rad)\n- p = 2\n- P = prime_decomp(p, T, dK=dK, ZK=ZK, radical=rad.get(p))\n+def test_str():\n+ # Without alias:\n+ k = QQ.alg_field_from_poly(Poly(x**2 + 7))\n+ frp = k.primes_above(2)[0]\n+ assert str(frp) == '(2, 3*_x/2 + 1/2)'\n+\n+ frp = k.primes_above(3)[0]\n+ assert str(frp) == '(3)'\n+\n+ # With alias:\n+ k = QQ.alg_field_from_poly(Poly(x ** 2 + 7), alias='alpha')\n+ frp = k.primes_above(2)[0]\n+ assert str(frp) == '(2, 3*alpha/2 + 1/2)'\n+\n+ frp = k.primes_above(3)[0]\n+ assert str(frp) == '(3)'\n+\n+\n+def test_repr():\n+ T = Poly(x**2 + 7)\n+ ZK, dK = round_two(T)\n+ P = prime_decomp(2, T, dK=dK, ZK=ZK)\n assert repr(P[0]) == '[ (2, (3*x + 1)/2) e=1, f=1 ]'\n- assert P[0].pretty(field_gen=theta) == '[ (2, (3*theta + 
1)/2) e=1, f=1 ]'\n- assert P[0].pretty(field_gen=theta, just_gens=True) == '(2, (3*theta + 1)/2)'\n+ assert P[0].repr(field_gen=theta) == '[ (2, (3*theta + 1)/2) e=1, f=1 ]'\n+ assert P[0].repr(field_gen=theta, just_gens=True) == '(2, (3*theta + 1)/2)'\n \n \n def test_PrimeIdeal_reduce_poly():\ndiff --git a/sympy/printing/latex.py b/sympy/printing/latex.py\nindex d8c4ebbf67dd..99f8579b24e7 100644\n--- a/sympy/printing/latex.py\n+++ b/sympy/printing/latex.py\n@@ -630,6 +630,13 @@ def _print_AlgebraicNumber(self, expr):\n else:\n return self._print(expr.as_expr())\n \n+ def _print_PrimeIdeal(self, expr):\n+ p = self._print(expr.p)\n+ if expr.alpha.is_rational:\n+ return rf'\\left({p}\\right)'\n+ alpha = self._print(expr.alpha.as_expr())\n+ return rf'\\left({p}, {alpha}\\right)'\n+\n def _print_Pow(self, expr):\n # Treat x**Rational(1,n) as special case\n if expr.exp.is_Rational and abs(expr.exp.p) == 1 and expr.exp.q != 1 \\\ndiff --git a/sympy/printing/tests/test_latex.py b/sympy/printing/tests/test_latex.py\nindex a158cdabcad4..de8e402c5802 100644\n--- a/sympy/printing/tests/test_latex.py\n+++ b/sympy/printing/tests/test_latex.py\n@@ -206,6 +206,11 @@ def test_latex_basic():\n r\"\\zeta^{3} + 2 \\zeta^{2} + 3 \\zeta + 4\"\n assert latex(k.ext.field_element([1, 2, 3, 4]), order='old') == \\\n r\"4 + 3 \\zeta + 2 \\zeta^{2} + \\zeta^{3}\"\n+ assert latex(k.primes_above(19)[0]) == \\\n+ r\"\\left(19, \\zeta^{2} + 5 \\zeta + 1\\right)\"\n+ assert latex(k.primes_above(19)[0], order='old') == \\\n+ r\"\\left(19, 1 + 5 \\zeta + \\zeta^{2}\\right)\"\n+ assert latex(k.primes_above(7)[0]) == r\"\\left(7\\right)\"\n \n assert latex(1.5e20*x) == r\"1.5 \\cdot 10^{20} x\"\n assert latex(1.5e20*x, mul_symbol='dot') == r\"1.5 \\cdot 10^{20} \\cdot x\"\n" }
[ { "diff_hunk": "@@ -678,15 +702,30 @@ def __init__(self, T):\n Parameters\n ==========\n \n- T : :py:class:`~.Poly`\n- The monic, irreducible, univariate polynomial over :ref:`ZZ`, a\n- root of which is the generator of the power basis.\n+ T : :py:class:`~.Poly`, :py:class:`~.AlgebraicField`\n+ Either (1) the monic, irreducible, univariate polynomial over\n+ :ref:`ZZ`, a root of which is the generator of the power basis,\n+ or (2) an :py:class:`~.AlgebraicField` whose primitive element\n+ is the generator of the power basis.\n \n \"\"\"\n+ K = None\n+ if isinstance(T, AlgebraicField):\n+ K, T = T, T.ext.minpoly_of_element()\n+ if T.domain == QQ:\n+ try:\n+ T = Poly(T, domain=ZZ)", "line": null, "original_line": 717, "original_start_line": null, "path": "sympy/polys/numberfields/modules.py", "start_line": null, "text": "@user1:\nThe direct way to do this is `T.set_domain(ZZ)`.\n\n@user1:\nI guess it isn't necessary to check if the domain is `QQ` since it should also raise for e.g. `ZZ[x]` if it can't be coerced to `ZZ`.\r\n\r\nAnother possibility would be to use `clear_denoms` I guess.\n\n@author:\n`set_domain()` looks good for this. Thanks, was not aware of this method!" } ]
a8ce918e6d0aee616abae750c631b1bb8a2fa0cd
diff --git a/sympy/polys/domains/algebraicfield.py b/sympy/polys/domains/algebraicfield.py index 830d820ba24e..241ce819a2a8 100644 --- a/sympy/polys/domains/algebraicfield.py +++ b/sympy/polys/domains/algebraicfield.py @@ -213,8 +213,7 @@ class AlgebraicField(Field, CharacteristicZero, SimpleDomain): >>> K QQ<exp(2*I*pi/7)> >>> K.primes_above(11) - [[ (11, _x**3 + 5*_x**2 + 4*_x - 1) e=1, f=3 ], - [ (11, _x**3 - 4*_x**2 - 5*_x - 1) e=1, f=3 ]] + [(11, _x**3 + 5*_x**2 + 4*_x - 1), (11, _x**3 - 4*_x**2 - 5*_x - 1)] Notes ===== @@ -433,7 +432,7 @@ def from_GaussianRationalField(K1, a, K0): def _do_round_two(self): from sympy.polys.numberfields.basis import round_two - ZK, dK = round_two(self.ext.minpoly, radicals=self._nilradicals_mod_p) + ZK, dK = round_two(self, radicals=self._nilradicals_mod_p) self._maximal_order = ZK self._discriminant = dK @@ -512,7 +511,6 @@ def integral_basis(self, fmt=None): return [self.to_alg_num(b) for b in B] return B - def discriminant(self): """Get the discriminant of the field.""" if self._discriminant is None: diff --git a/sympy/polys/numberfields/basis.py b/sympy/polys/numberfields/basis.py index 9cb4b0e0aab4..a4ff2df0b286 100644 --- a/sympy/polys/numberfields/basis.py +++ b/sympy/polys/numberfields/basis.py @@ -1,6 +1,7 @@ """Computing integral bases for number fields. """ from sympy.polys.polytools import Poly +from sympy.polys.domains.algebraicfield import AlgebraicField from sympy.polys.domains.integerring import ZZ from sympy.polys.domains.rationalfield import QQ from sympy.polys.polyerrors import CoercionFailed @@ -103,6 +104,10 @@ def round_two(T, radicals=None): polynomial *T* over :ref:`ZZ`. This computes an integral basis and the discriminant for the field $K = \mathbb{Q}[x]/(T(x))$. + Alternatively, you may pass an :py:class:`~.AlgebraicField` instance, in + place of the polynomial *T*, in which case the algorithm is applied to the + minimal polynomial for the field's primitive element. + Ordinarily this function need not be called directly, as one can instead access the :py:meth:`~.AlgebraicField.maximal_order`, :py:meth:`~.AlgebraicField.integral_basis`, and @@ -147,9 +152,10 @@ def round_two(T, radicals=None): Parameters ========== - T : :py:class:`~.Poly` - The irreducible monic polynomial over :ref:`ZZ` defining the number - field. + T : :py:class:`~.Poly`, :py:class:`~.AlgebraicField` + Either (1) the irreducible monic polynomial over :ref:`ZZ` defining the + number field, or (2) an :py:class:`~.AlgebraicField` representing the + number field itself. radicals : dict, optional This is a way for any $p$-radicals (if computed) to be returned by @@ -182,6 +188,9 @@ def round_two(T, radicals=None): .. [1] Cohen, H. *A Course in Computational Algebraic Number Theory.* """ + K = None + if isinstance(T, AlgebraicField): + K, T = T, T.ext.minpoly_of_element() if T.domain == QQ: try: T = Poly(T, domain=ZZ) @@ -198,7 +207,7 @@ def round_two(T, radicals=None): # D must be 0 or 1 mod 4 (see Cohen Sec 4.4), which ensures we can write # it in the form D = D_0 * F**2, where D_0 is 1 or a fundamental discriminant. 
_, F = extract_fundamental_discriminant(D) - Ztheta = PowerBasis(T) + Ztheta = PowerBasis(K or T) H = Ztheta.whole_submodule() nilrad = None while F: diff --git a/sympy/polys/numberfields/modules.py b/sympy/polys/numberfields/modules.py index 22813c8065f3..f945fbfc9487 100644 --- a/sympy/polys/numberfields/modules.py +++ b/sympy/polys/numberfields/modules.py @@ -10,8 +10,9 @@ * For a :py:class:`~.PowerBasis`, the generators are the first $n$ powers (starting with the zeroth) of an algebraic integer $\theta$ of degree $n$. - The :py:class:`~.PowerBasis` is constructed by passing the minimal - polynomial of $\theta$. + The :py:class:`~.PowerBasis` is constructed by passing either the minimal + polynomial of $\theta$, or an :py:class:`~.AlgebraicField` having $\theta$ + as its primitive element. * For a :py:class:`~.Submodule`, the generators are a set of $\mathbb{Q}$-linear combinations of the generators of another module. That @@ -181,6 +182,7 @@ from sympy.core.symbol import Dummy from sympy.polys.polytools import Poly from sympy.polys.densetools import dup_clear_denoms +from sympy.polys.domains.algebraicfield import AlgebraicField from sympy.polys.domains.finitefield import FF from sympy.polys.domains.rationalfield import QQ from sympy.polys.domains.integerring import ZZ @@ -436,6 +438,28 @@ def nearest_common_ancestor(self, other): break return nca + @property + def number_field(self): + r""" + Return the associated :py:class:`~.AlgebraicField`, if any. + + Explanation + =========== + + A :py:class:`~.PowerBasis` can be constructed on a :py:class:`~.Poly` + $f$ or on an :py:class:`~.AlgebraicField` $K$. In the latter case, the + :py:class:`~.PowerBasis` and all its descendant modules will return $K$ + as their ``.number_field`` property, while in the former case they will + all return ``None``. + + Returns + ======= + + :py:class:`~.AlgebraicField`, ``None`` + + """ + return self.power_basis_ancestor().number_field + def is_compat_col(self, col): """Say whether *col* is a suitable column vector for this module.""" return isinstance(col, DomainMatrix) and col.shape == (self.n, 1) and col.domain.is_ZZ @@ -678,15 +702,28 @@ def __init__(self, T): Parameters ========== - T : :py:class:`~.Poly` - The monic, irreducible, univariate polynomial over :ref:`ZZ`, a - root of which is the generator of the power basis. + T : :py:class:`~.Poly`, :py:class:`~.AlgebraicField` + Either (1) the monic, irreducible, univariate polynomial over + :ref:`ZZ`, a root of which is the generator of the power basis, + or (2) an :py:class:`~.AlgebraicField` whose primitive element + is the generator of the power basis. """ + K = None + if isinstance(T, AlgebraicField): + K, T = T, T.ext.minpoly_of_element() + # Sometimes incoming Polys are formally over QQ, although all their + # coeffs are integral. We want them to be formally over ZZ. + T = T.set_domain(ZZ) + self.K = K self.T = T self._n = T.degree() self._mult_tab = None + @property + def number_field(self): + return self.K + def __repr__(self): return f'PowerBasis({self.T.as_expr()})' @@ -1542,6 +1579,28 @@ def poly(self, x=None): """Obtain the number as a polynomial over :ref:`QQ`.""" return self.numerator(x=x) // self.denom + @property + def is_rational(self): + """Say whether this element represents a rational number.""" + return self.col[1:, :].is_zero_matrix + + @property + def generator(self): + """ + Return a :py:class:`~.Symbol` to be used when expressing this element + as a polynomial. 
+ + If we have an associated :py:class:`~.AlgebraicField` whose primitive + element has an alias symbol, we use that. Otherwise we use the variable + of the minimal polynomial defining the power basis to which we belong. + """ + K = self.module.number_field + return K.ext.alias if K and K.ext.is_aliased else self.T.gen + + def as_expr(self, x=None): + """Create a Basic expression from ``self``. """ + return self.poly(x or self.generator).as_expr() + def norm(self, T=None): """Compute the norm of this number.""" T = T or self.T diff --git a/sympy/polys/numberfields/primes.py b/sympy/polys/numberfields/primes.py index dd3f6145cbfc..7fed6672cb3e 100644 --- a/sympy/polys/numberfields/primes.py +++ b/sympy/polys/numberfields/primes.py @@ -70,7 +70,12 @@ def __init__(self, ZK, p, alpha, f, e=None): self._test_factor = None self.e = e if e is not None else self.valuation(p * ZK) - def pretty(self, field_gen=None, just_gens=False): + def __str__(self): + if self.alpha.is_rational: + return f'({self.p})' + return f'({self.p}, {self.alpha.as_expr()})' + + def repr(self, field_gen=None, just_gens=False): """ Print a representation of this prime ideal. @@ -82,11 +87,11 @@ def pretty(self, field_gen=None, just_gens=False): >>> T = cyclotomic_poly(7, x) >>> K = QQ.algebraic_field((T, zeta)) >>> P = K.primes_above(11) - >>> print(P[0].pretty()) + >>> print(P[0].repr()) [ (11, x**3 + 5*x**2 + 4*x - 1) e=1, f=3 ] - >>> print(P[0].pretty(field_gen=zeta)) + >>> print(P[0].repr(field_gen=zeta)) [ (11, zeta**3 + 5*zeta**2 + 4*zeta - 1) e=1, f=3 ] - >>> print(P[0].pretty(field_gen=zeta, just_gens=True)) + >>> print(P[0].repr(field_gen=zeta, just_gens=True)) (11, zeta**3 + 5*zeta**2 + 4*zeta - 1) Parameters @@ -114,7 +119,7 @@ def pretty(self, field_gen=None, just_gens=False): return f'[ {gens} e={e}, f={f} ]' def __repr__(self): - return self.pretty() + return self.repr() def as_submodule(self): r""" diff --git a/sympy/polys/numberfields/tests/test_primes.py b/sympy/polys/numberfields/tests/test_primes.py index 09e31e012918..93ec01375564 100644 --- a/sympy/polys/numberfields/tests/test_primes.py +++ b/sympy/polys/numberfields/tests/test_primes.py @@ -193,17 +193,16 @@ def test_decomp_8(): x ** 3 + 9 * x ** 2 + 6 * x - 8, x ** 3 + 15 * x ** 2 - 9 * x + 13, ) - ''' def display(T, p, radical, P, I, J): """Useful for inspection, when running test manually.""" print('=' * 20) print(T, p, radical) for Pi in P: - print(f' ({Pi.pretty()})') + print(f' ({Pi!r})') print("I: ", I) print("J: ", J) print(f'Equal: {I == J}') - ''' + inspect = False for g in cases: T = Poly(g) rad = {} @@ -216,7 +215,8 @@ def display(T, p, radical, P, I, J): P = prime_decomp(p, T, dK=dK, ZK=ZK, radical=radical) I = prod(Pi**Pi.e for Pi in P) J = p * ZK - #display(T, p, radical, P, I, J) + if inspect: + display(T, p, radical, P, I, J) assert I == J @@ -238,16 +238,31 @@ def test_PrimeIdeal_add(): assert P0 + 7 * P0.ZK == P0.as_submodule() -def test_pretty_printing(): - d = -7 - T = Poly(x ** 2 - d) - rad = {} - ZK, dK = round_two(T, radicals=rad) - p = 2 - P = prime_decomp(p, T, dK=dK, ZK=ZK, radical=rad.get(p)) +def test_str(): + # Without alias: + k = QQ.alg_field_from_poly(Poly(x**2 + 7)) + frp = k.primes_above(2)[0] + assert str(frp) == '(2, 3*_x/2 + 1/2)' + + frp = k.primes_above(3)[0] + assert str(frp) == '(3)' + + # With alias: + k = QQ.alg_field_from_poly(Poly(x ** 2 + 7), alias='alpha') + frp = k.primes_above(2)[0] + assert str(frp) == '(2, 3*alpha/2 + 1/2)' + + frp = k.primes_above(3)[0] + assert str(frp) == '(3)' + + +def 
test_repr(): + T = Poly(x**2 + 7) + ZK, dK = round_two(T) + P = prime_decomp(2, T, dK=dK, ZK=ZK) assert repr(P[0]) == '[ (2, (3*x + 1)/2) e=1, f=1 ]' - assert P[0].pretty(field_gen=theta) == '[ (2, (3*theta + 1)/2) e=1, f=1 ]' - assert P[0].pretty(field_gen=theta, just_gens=True) == '(2, (3*theta + 1)/2)' + assert P[0].repr(field_gen=theta) == '[ (2, (3*theta + 1)/2) e=1, f=1 ]' + assert P[0].repr(field_gen=theta, just_gens=True) == '(2, (3*theta + 1)/2)' def test_PrimeIdeal_reduce_poly(): diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py index d8c4ebbf67dd..99f8579b24e7 100644 --- a/sympy/printing/latex.py +++ b/sympy/printing/latex.py @@ -630,6 +630,13 @@ def _print_AlgebraicNumber(self, expr): else: return self._print(expr.as_expr()) + def _print_PrimeIdeal(self, expr): + p = self._print(expr.p) + if expr.alpha.is_rational: + return rf'\left({p}\right)' + alpha = self._print(expr.alpha.as_expr()) + return rf'\left({p}, {alpha}\right)' + def _print_Pow(self, expr): # Treat x**Rational(1,n) as special case if expr.exp.is_Rational and abs(expr.exp.p) == 1 and expr.exp.q != 1 \ diff --git a/sympy/printing/tests/test_latex.py b/sympy/printing/tests/test_latex.py index a158cdabcad4..de8e402c5802 100644 --- a/sympy/printing/tests/test_latex.py +++ b/sympy/printing/tests/test_latex.py @@ -206,6 +206,11 @@ def test_latex_basic(): r"\zeta^{3} + 2 \zeta^{2} + 3 \zeta + 4" assert latex(k.ext.field_element([1, 2, 3, 4]), order='old') == \ r"4 + 3 \zeta + 2 \zeta^{2} + \zeta^{3}" + assert latex(k.primes_above(19)[0]) == \ + r"\left(19, \zeta^{2} + 5 \zeta + 1\right)" + assert latex(k.primes_above(19)[0], order='old') == \ + r"\left(19, 1 + 5 \zeta + \zeta^{2}\right)" + assert latex(k.primes_above(7)[0]) == r"\left(7\right)" assert latex(1.5e20*x) == r"1.5 \cdot 10^{20} x" assert latex(1.5e20*x, mul_symbol='dot') == r"1.5 \cdot 10^{20} \cdot x"
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Code Refactoring / Architectural Improvement" }
sympy__sympy-23241@d566b5c
sympy/sympy
Python
23,241
Series/Limits: Fixed leading term method inconsistency for expressions involving variables in powers
#### References to other Issues or PRs
Fixes #23231
#### Brief description of what is fixed or changed
The block we are getting rid of is as follows.
```
--- a/sympy/core/power.py
+++ b/sympy/core/power.py
@@ -1792,11 +1792,7 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0):
             raise PoleError("Cannot expand %s around 0" % (self))
         elif e.has(x):
             lt = exp(e * log(b))
-            try:
-                lt = lt.as_leading_term(x, logx=logx, cdir=cdir)
-            except PoleError:
-                pass
-            return lt
+            return lt.as_leading_term(x, logx=logx, cdir=cdir)
```
This block was responsible for handling the cases where expressions have args involving variables in exponents/powers, e.g. `x**x`, `(2**x + x**(1/x))`. Hence getting rid of this block would spoil almost all limit and order cases of this type, as something like `(x**(1/x)).as_leading_term(x)` would give a `NotImplementedError`/`PoleError`.
```
# On master
>>> (x**(1/x)).as_leading_term(x)
exp(log(x)/x)
>>> (2**(1/x)).as_leading_term(x)
exp(log(2)/x)
>>> ((1/x)**(1/x)).as_leading_term(x)
exp(log(1/x)/x)
```
What would potentially be spoiled here:
```
# results after making the change
>>> O(x**x, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O((1/x)**x, (x, oo))
>>> O(x**x + 2**x, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(1, (x, oo))
>>> O(x**x + 1/x**2, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(x**(-2), (x, oo))
>>>
>>> O(x**x + x**2, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(x**2, (x, oo))
```
Hence this case now needs to be expressed in the leading term method of `exponential.py`. Order for such cases (variables present in exponents) was undertested (just one trivial case) or rather not tested at all, hence I have added a test for this too.
#### Other comments
#### Release Notes
* series
  * Fixed leading term method inconsistency for expressions involving variables in powers.
2022-03-15T07:04:32Z
Sympy giving the wrong solution

Hello,

While doing an example with

```
f(x) = (2^(1/x) - 2^(-1/x))/(2^(1/x) + 2^(-1/x))
```

where you want to see the left end-behavior and the right end-behavior for f(1/x). Now the left end-behavior is -1 and the right end-behavior is 1, but according to SymPy the result is equal to 1.

```
julia> using PythonCall

julia> sympy = pyimport("sympy")
Python module: <module 'sympy' from '/Users/verzani/.julia/environments/v1.7/.CondaPkg/env/lib/python3.10/site-packages/sympy/__init__.py'>

julia> sympy.limit((2^x - 2^(-x))/(2^x + 2^(-x)), x, -sympy.oo)^C

julia> x = sympy.symbols("x")
Python Symbol: x

julia> sympy.limit((2^x - 2^(-x))/(2^x + 2^(-x)), x, -sympy.oo)
Python One: 1

julia> sympy.__version__
Python str: '1.10'
```

Any reason behind this error?
This looks like a regression; it was working correctly in e.g. 1.7.1.

I can confirm this is an error:

```
>>> f = (2**x - 2**(-x))/(2**x + 2**(-x))
>>>
>>> limit(f, x, -oo)
1  # expected -1
```

Bisected to bf1eb95c20d7b74e5bd30bfe03b17745f09faf23 from #21589. CC @0sidharth @jksuom

It seems that `Pow._eval_as_leading_term` masks a `PoleError` that should be seen by `Limit.doit`. Something like this might resolve the issue:

```
--- a/sympy/core/power.py
+++ b/sympy/core/power.py
@@ -1792,11 +1792,7 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0):
             raise PoleError("Cannot expand %s around 0" % (self))
         elif e.has(x):
             lt = exp(e * log(b))
-            try:
-                lt = lt.as_leading_term(x, logx=logx, cdir=cdir)
-            except PoleError:
-                pass
-            return lt
+            return lt.as_leading_term(x, logx=logx, cdir=cdir)
         else:
             from sympy.functions.elementary.complexes import im
             f = b.as_leading_term(x, logx=logx, cdir=cdir)
```

I had just started solving this recently and realized you provided the diff just now, so I'll confirm and verify any mistakes (test failures due to this) after the change.

**EDIT1:** I realize what was going wrong here, and this change seems to address the issue correctly. Although I don't see any serious errors (just one overall, which is a trivial case in `test_order.py` and can be addressed), I presume the removed block was added to address some case in `order.py`, so it needs some observation to see if something is going wrong.

**EDIT2:** Okay, I now know which cases were dependent on the current implementation. Exponentials of the form `x**x` and `x**y` (multivariate version) are being spoiled here, though there aren't many tests to catch this in `test_order.py`:

```
# results after making the change
>>> O(x**x, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O((1/x)**x, (x, oo))
>>> O(x**x + 2**x, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(1, (x, oo))
>>> O(x**x + 1/x**2, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(x**(-2), (x, oo))
>>>
>>> O(x**x + x**2, (x, oo)) # on master O(exp(x*log(x)), (x, oo))
O(x**2, (x, oo))
```

Will try to check what's going wrong here sometime soon!
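A quick numerical sanity check of the expected answer, independent of which SymPy version is installed: evaluating the expression at a large negative x shows the limit should indeed be -1, matching the issue report.

```python
from sympy import symbols

x = symbols('x')
f = (2**x - 2**(-x))/(2**x + 2**(-x))
print(f.subs(x, -30).evalf())   # ~ -1.0, so limit(f, x, -oo) should be -1, not 1
```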
[ { "body": "Hello, \r\n\r\nWhile doing an example with \r\n```\r\nf(x) =( 2^(1/x) -2^(-1/x))/( 2^(1/x) +2^(-1/x))\r\n```\r\nWhere you want to see the left end-behavior and the right end-behavior for f(1/x). Now left end-behavior is -1 and right end-behavior is 1 but according to SymPy its results is equal to 1. \r\n\r\n```\r\njulia> using PythonCall\r\n \r\njulia> sympy = pyimport(\"sympy\")\r\nPython module: <module 'sympy' from '/Users/verzani/.julia/environments/v1.7/.CondaPkg/env/lib/python3.10/site-packages/sympy/__init__.py'>\r\n \r\njulia> sympy.limit((2^x - 2^(-x))/(2^x + 2^(-x)), x, -sympy.oo)^C\r\n \r\njulia> x = sympy.symbols(\"x\")\r\nPython Symbol: x\r\n \r\njulia> sympy.limit((2^x - 2^(-x))/(2^x + 2^(-x)), x, -sympy.oo)\r\nPython One: 1\r\n \r\njulia> sympy.__version__\r\nPython str: '1.10'\r\n```\r\n\r\nAny reason behind this error? ", "number": 23231, "title": "Sympy giving the wrong solution " } ]
ebda8eb85f7c07c7c01d6bf7678aca03243a2ef3
{ "head_commit": "d566b5c03fca3cdf38308516e30dc6979d04f33a", "head_commit_message": "Added a case in leading term method of exponential.py", "patch_to_review": "diff --git a/sympy/core/power.py b/sympy/core/power.py\nindex c85a1fcee2c1..8be33df01ad4 100644\n--- a/sympy/core/power.py\n+++ b/sympy/core/power.py\n@@ -1793,11 +1793,7 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0):\n raise PoleError(\"Cannot expand %s around 0\" % (self))\n elif e.has(x):\n lt = exp(e * log(b))\n- try:\n- lt = lt.as_leading_term(x, logx=logx, cdir=cdir)\n- except PoleError:\n- pass\n- return lt\n+ return lt.as_leading_term(x, logx=logx, cdir=cdir)\n else:\n from sympy.functions.elementary.complexes import im\n f = b.as_leading_term(x, logx=logx, cdir=cdir)\ndiff --git a/sympy/functions/elementary/exponential.py b/sympy/functions/elementary/exponential.py\nindex a3994b3a04f7..39fdd8490777 100644\n--- a/sympy/functions/elementary/exponential.py\n+++ b/sympy/functions/elementary/exponential.py\n@@ -541,6 +541,8 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0):\n # Check out function: test_issue_18473() in test_exponential.py and\n # test_limits.py for more information.\n return exp(arg0)\n+ if isinstance(arg.as_numer_denom()[0], log):\n+ return self\n if arg0 is S.NaN:\n arg0 = arg.limit(x, 0)\n if arg0.is_infinite is False:\ndiff --git a/sympy/functions/elementary/tests/test_exponential.py b/sympy/functions/elementary/tests/test_exponential.py\nindex 9b5383429818..c6ded13d480a 100644\n--- a/sympy/functions/elementary/tests/test_exponential.py\n+++ b/sympy/functions/elementary/tests/test_exponential.py\n@@ -210,6 +210,12 @@ def test_exp_leading_term():\n # raises(NotImplementedError, lambda: exp(x + 1/x).as_leading_term(x))\n \n \n+def test_issue_23231():\n+ assert (x**(1/x)).as_leading_term(x) == exp(log(x)/x)\n+ assert (2**(1/x)).as_leading_term(x) == exp(log(2)/x)\n+ assert ((1/x)**(1/x)).as_leading_term(x) == exp(log(1/x)/x)\n+\n+\n @_both_exp_pow\n def test_exp_taylor_term():\n x = symbols('x')\ndiff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py\nindex d3d0ee0e10b9..6f7de1e35707 100644\n--- a/sympy/series/tests/test_limits.py\n+++ b/sympy/series/tests/test_limits.py\n@@ -1102,3 +1102,8 @@ def test_issue_21785():\n \n def test_issue_22181():\n assert limit((-1)**x * 2**(-x), x, oo) == 0\n+\n+\n+def test_issue_23231():\n+ f = (2**x - 2**(-x))/(2**x + 2**(-x))\n+ assert limit(f, x, -oo) == -1\ndiff --git a/sympy/series/tests/test_order.py b/sympy/series/tests/test_order.py\nindex 18e7bd1adf78..132f35af8fc0 100644\n--- a/sympy/series/tests/test_order.py\n+++ b/sympy/series/tests/test_order.py\n@@ -460,5 +460,14 @@ def test_issue_22165():\n assert O(log(x)).contains(2)\n \n \n+def test_issue_23231():\n+ # This test checks Order for expressions having\n+ # arguments containing variables in exponents/powers.\n+ assert O(x**x + 2**x, (x, oo)) == O(exp(x*log(x)), (x, oo))\n+ assert O(x**x + x**2, (x, oo)) == O(exp(x*log(x)), (x, oo))\n+ assert O(x**x + 1/x**2, (x, oo)) == O(exp(x*log(x)), (x, oo))\n+ assert O(2**x + 3**x , (x, oo)) == O(exp(x*log(3)), (x, oo))\n+\n+\n def test_issue_9917():\n assert O(x*sin(x) + 1, (x, oo)) == O(x, (x, oo))\n" }
[ { "diff_hunk": "@@ -541,6 +541,8 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0):\n # Check out function: test_issue_18473() in test_exponential.py and\n # test_limits.py for more information.\n return exp(arg0)\n+ if isinstance(arg.as_numer_denom()[0], log):", "line": null, "original_line": 544, "original_start_line": null, "path": "sympy/functions/elementary/exponential.py", "start_line": null, "text": "@user1:\nIt seems that this will only work in very special cases not including this:\r\n```\r\n>>> O(x**(2*x), (x, oo))\r\nO(exp(x*log(1/x)), (x, oo))\r\n```\r\n\n\n@author:\nTrue I am fixing this , i realized this will become a bit specialized . Hence I turned this into draft . Working on this" } ]
7e161a6d3bc6186f1acdb17ca7e844adfb316472
diff --git a/sympy/core/power.py b/sympy/core/power.py index c85a1fcee2c1..8be33df01ad4 100644 --- a/sympy/core/power.py +++ b/sympy/core/power.py @@ -1793,11 +1793,7 @@ def _eval_as_leading_term(self, x, logx=None, cdir=0): raise PoleError("Cannot expand %s around 0" % (self)) elif e.has(x): lt = exp(e * log(b)) - try: - lt = lt.as_leading_term(x, logx=logx, cdir=cdir) - except PoleError: - pass - return lt + return lt.as_leading_term(x, logx=logx, cdir=cdir) else: from sympy.functions.elementary.complexes import im f = b.as_leading_term(x, logx=logx, cdir=cdir) diff --git a/sympy/series/order.py b/sympy/series/order.py index 46afcc95c810..29cf19df6d1b 100644 --- a/sympy/series/order.py +++ b/sympy/series/order.py @@ -3,6 +3,7 @@ from sympy.core.containers import Tuple from sympy.core.function import Function, PoleError, expand_power_base, expand_log from sympy.core.sorting import default_sort_key +from sympy.functions.elementary.exponential import exp, log from sympy.sets.sets import Complement from sympy.utilities.iterables import uniq, is_sequence @@ -255,7 +256,9 @@ def __new__(cls, expr, *args, **kwargs): elif expr.is_Mul: expr = Mul(*[a.expr for a in orders]) elif expr.is_Pow: - expr = orders[0].expr**orders[1].expr + e = expr.exp + b = expr.base + expr = exp(e * log(b)) expr = expr.as_independent(*args, as_Add=False)[1] expr = expand_power_base(expr) diff --git a/sympy/series/tests/test_limits.py b/sympy/series/tests/test_limits.py index d3d0ee0e10b9..6f7de1e35707 100644 --- a/sympy/series/tests/test_limits.py +++ b/sympy/series/tests/test_limits.py @@ -1102,3 +1102,8 @@ def test_issue_21785(): def test_issue_22181(): assert limit((-1)**x * 2**(-x), x, oo) == 0 + + +def test_issue_23231(): + f = (2**x - 2**(-x))/(2**x + 2**(-x)) + assert limit(f, x, -oo) == -1 diff --git a/sympy/series/tests/test_order.py b/sympy/series/tests/test_order.py index 18e7bd1adf78..c3a695f81341 100644 --- a/sympy/series/tests/test_order.py +++ b/sympy/series/tests/test_order.py @@ -382,7 +382,7 @@ def test_order_at_infinity(): # issue 7207 assert Order(exp(x), (x, oo)).expr == Order(2*exp(x), (x, oo)).expr == exp(x) - assert Order(y**x, (x, oo)).expr == Order(2*y**x, (x, oo)).expr == exp(log(y)*x) + assert Order(y**x, (x, oo)).expr == Order(2*y**x, (x, oo)).expr == exp(x*log(y)) # issue 19545 assert Order(1/x - 3/(3*x + 2), (x, oo)).expr == x**(-2) @@ -460,5 +460,14 @@ def test_issue_22165(): assert O(log(x)).contains(2) +def test_issue_23231(): + # This test checks Order for expressions having + # arguments containing variables in exponents/powers. + assert O(x**x + 2**x, (x, oo)) == O(exp(x*log(x)), (x, oo)) + assert O(x**x + x**2, (x, oo)) == O(exp(x*log(x)), (x, oo)) + assert O(x**x + 1/x**2, (x, oo)) == O(exp(x*log(x)), (x, oo)) + assert O(2**x + 3**x , (x, oo)) == O(exp(x*log(3)), (x, oo)) + + def test_issue_9917(): assert O(x*sin(x) + 1, (x, oo)) == O(x, (x, oo))
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-23218@74e3612
sympy/sympy
Python
23,218
feat(physics): `quantity_simplify` can now optionally simplify units across dimensions
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #21234 #### Brief description of what is fixed or changed Added an optional flag, `across_dimensions` to the function `quantity_simplify`, which allows units to be simplified across dimensions, eg. joule/coulomb -> volt. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.units * `Quantity`: it is now possible to determine if a Quantity is prefixed (eg. `nanometers`) with the property `is_prefixed`. * `Quantity`: it is now possible to determine if a Quantity represents a physics constant (eg. `speed_of_light`) with the property `is_physical_constant` or if is an instance of the new subclass `PhysicalConstant`. * `quantity_simplify()` can now simplify units across dimensions using the optional flag `across_dimensions=True`. <!-- END RELEASE NOTES -->
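A short sketch of how the flag and properties described above are exercised, following the examples in the PR's own tests (this assumes a SymPy build containing the change):

```python
from sympy.physics.units import joule, coulomb, meter, kilometer, speed_of_light
from sympy.physics.units.util import quantity_simplify

# Cross-dimension simplification is opt-in; the unit system is named so the
# preferred derived unit (volt, pascal, ...) for the resulting dimension is known.
print(quantity_simplify(joule / coulomb, across_dimensions=True, unit_system="SI"))   # volt
print(quantity_simplify(joule / meter**3, across_dimensions=True, unit_system="SI"))  # pascal

# The two new Quantity properties mentioned in the release notes.
print(meter.is_prefixed)                    # False
print(kilometer.is_prefixed)                # True
print(speed_of_light.is_physical_constant)  # True
```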
2022-03-08T16:59:56Z
Can't get joule/m^3 to simplify into pascal Sample: ```python from sympy.physics.units import * print(joule/meter**3) print(simplify(joule/meter**3)) ``` I expected one of these to give me pascal as the unit, but instead I get `joule/meter**3` in both cases. If I try `(joule/meter**3).convert_to(pascal)` instead, I get `AttributeError: 'Mul' object has no attribute 'convert_to'`. However, `convert_to(joule/meter**3, pascal)` succeeds. I also tried `sympy.physics.units.util.quantity_simplify(joule/meter**3)`, but that also gives me `joule/meter**3`
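For reference, the functional `convert_to` call mentioned at the end of the report is the path that already works, even without any change to `quantity_simplify`:

```python
from sympy.physics.units import joule, meter, pascal, convert_to

# The method form (joule/meter**3).convert_to(pascal) fails because the product
# is a Mul, not a Quantity, but the functional form already gives pascal:
print(convert_to(joule / meter**3, pascal))  # pascal
```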
I'd like to work on this issue. @dyc3 I looked over this and tested some parts of it, from what I understood, is that quantity_simplify() only works when the parameters are of the same unit, in the code, there is an example: >>> quantity_simplify(kilo*foot*inch) 250*foot**2/3 This works because foot and inch are both units of length, unlike your expression. Let me know if I have erred somewhere! I'm looking more into the code and will let you know if I find something :) For `quantity_simplify`, the docs state: > Return an equivalent expression in which prefixes are replaced with numerical values and **all units of a given dimension are the unified** in a canonical manner. So technically, this is expected behavior. However, I think that `quantity_simplify` should be expanded to cover cases like this, where the dimensions of the unit in the expression can be simplified into a different unit. These types of unit simplifications are not necessarily intuitive to end users, so being able to have sympy cover it would help. For example: - `joule/meter**3` -> `pascal` - `ampere * ohm` -> `volt` - `farad * ohm` -> `second` - `joule / second` -> `watt` It also might be nice to have this extra feature gated behind a flag argument that defaults to true. Opened a PR: #23218
[ { "body": "Sample:\r\n```python\r\nfrom sympy.physics.units import *\r\nprint(joule/meter**3)\r\nprint(simplify(joule/meter**3))\r\n```\r\nI expected one of these to give me pascal as the unit, but instead I get `joule/meter**3` in both cases.\r\n\r\nIf I try `(joule/meter**3).convert_to(pascal)` instead, I get `AttributeError: 'Mul' object has no attribute 'convert_to'`. However, `convert_to(joule/meter**3, pascal)` succeeds.\r\n\r\nI also tried `sympy.physics.units.util.quantity_simplify(joule/meter**3)`, but that also gives me `joule/meter**3`", "number": 21234, "title": "Can't get joule/m^3 to simplify into pascal" } ]
fdc707f73a65a429935c01532cd3970d3355eab6
{ "head_commit": "74e3612c823c3017ae855d67994b55504367687c", "head_commit_message": "refactor(physics.units): hard code preferred units map into each unit system\n\nI also took the liberty of renaming some imports to be more explicit and clear.", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 18f20831db4a..ee94aa49dc36 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -271,6 +271,7 @@ Calvin Jay Ross <[email protected]>\n Cameron King <[email protected]>\n Carl Sandrock <[email protected]>\n Carlos Cordoba <[email protected]>\n+Carson McManus <[email protected]> Carson McManus <[email protected]>\n Carsten Knoll <[email protected]>\n Case Van Horsen <[email protected]>\n Cavendish McKay <[email protected]>\ndiff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py\nindex b20e408567be..171312eba78e 100644\n--- a/sympy/core/tests/test_args.py\n+++ b/sympy/core/tests/test_args.py\n@@ -4251,6 +4251,11 @@ def test_sympy__physics__units__quantities__Quantity():\n assert _test_args(Quantity(\"dam\"))\n \n \n+def test_sympy__physics__units__quantities__PhysicalConstant():\n+ from sympy.physics.units.quantities import PhysicalConstant\n+ assert _test_args(PhysicalConstant(\"foo\"))\n+\n+\n def test_sympy__physics__units__prefixes__Prefix():\n from sympy.physics.units.prefixes import Prefix\n assert _test_args(Prefix('kilo', 'k', 3))\ndiff --git a/sympy/physics/units/definitions/unit_definitions.py b/sympy/physics/units/definitions/unit_definitions.py\nindex b55569fab87a..4939fa65acb5 100644\n--- a/sympy/physics/units/definitions/unit_definitions.py\n+++ b/sympy/physics/units/definitions/unit_definitions.py\n@@ -5,7 +5,7 @@\n from sympy.core.numbers import (Rational, pi)\n from sympy.core.singleton import S as S_singleton\n from sympy.physics.units.prefixes import kilo, mega, milli, micro, deci, centi, nano, pico, kibi, mebi, gibi, tebi, pebi, exbi\n-from sympy.physics.units.quantities import Quantity\n+from sympy.physics.units.quantities import PhysicalConstant, Quantity\n \n One = S_singleton.One\n \n@@ -117,7 +117,7 @@\n ug.set_global_relative_scale_factor(micro, gram)\n \n # Atomic mass constant\n-Da = dalton = amu = amus = atomic_mass_unit = atomic_mass_constant = Quantity(\"atomic_mass_constant\")\n+Da = dalton = amu = amus = atomic_mass_unit = atomic_mass_constant = PhysicalConstant(\"atomic_mass_constant\")\n \n t = metric_ton = tonne = Quantity(\"tonne\", abbrev=\"t\")\n tonne.set_global_relative_scale_factor(mega, gram)\n@@ -232,62 +232,62 @@\n #### CONSTANTS ####\n \n # Newton constant\n-G = gravitational_constant = Quantity(\"gravitational_constant\", abbrev=\"G\")\n+G = gravitational_constant = PhysicalConstant(\"gravitational_constant\", abbrev=\"G\")\n \n # speed of light\n-c = speed_of_light = Quantity(\"speed_of_light\", abbrev=\"c\")\n+c = speed_of_light = PhysicalConstant(\"speed_of_light\", abbrev=\"c\")\n \n # elementary charge\n-elementary_charge = Quantity(\"elementary_charge\", abbrev=\"e\")\n+elementary_charge = PhysicalConstant(\"elementary_charge\", abbrev=\"e\")\n \n # Planck constant\n-planck = Quantity(\"planck\", abbrev=\"h\")\n+planck = PhysicalConstant(\"planck\", abbrev=\"h\")\n \n # Reduced Planck constant\n-hbar = Quantity(\"hbar\", abbrev=\"hbar\")\n+hbar = PhysicalConstant(\"hbar\", abbrev=\"hbar\")\n \n # Electronvolt\n-eV = electronvolt = electronvolts = Quantity(\"electronvolt\", abbrev=\"eV\")\n+eV = electronvolt = electronvolts = PhysicalConstant(\"electronvolt\", abbrev=\"eV\")\n \n # Avogadro number\n-avogadro_number = 
Quantity(\"avogadro_number\")\n+avogadro_number = PhysicalConstant(\"avogadro_number\")\n \n # Avogadro constant\n-avogadro = avogadro_constant = Quantity(\"avogadro_constant\")\n+avogadro = avogadro_constant = PhysicalConstant(\"avogadro_constant\")\n \n # Boltzmann constant\n-boltzmann = boltzmann_constant = Quantity(\"boltzmann_constant\")\n+boltzmann = boltzmann_constant = PhysicalConstant(\"boltzmann_constant\")\n \n # Stefan-Boltzmann constant\n-stefan = stefan_boltzmann_constant = Quantity(\"stefan_boltzmann_constant\")\n+stefan = stefan_boltzmann_constant = PhysicalConstant(\"stefan_boltzmann_constant\")\n \n # Molar gas constant\n-R = molar_gas_constant = Quantity(\"molar_gas_constant\", abbrev=\"R\")\n+R = molar_gas_constant = PhysicalConstant(\"molar_gas_constant\", abbrev=\"R\")\n \n # Faraday constant\n-faraday_constant = Quantity(\"faraday_constant\")\n+faraday_constant = PhysicalConstant(\"faraday_constant\")\n \n # Josephson constant\n-josephson_constant = Quantity(\"josephson_constant\", abbrev=\"K_j\")\n+josephson_constant = PhysicalConstant(\"josephson_constant\", abbrev=\"K_j\")\n \n # Von Klitzing constant\n-von_klitzing_constant = Quantity(\"von_klitzing_constant\", abbrev=\"R_k\")\n+von_klitzing_constant = PhysicalConstant(\"von_klitzing_constant\", abbrev=\"R_k\")\n \n # Acceleration due to gravity (on the Earth surface)\n-gee = gees = acceleration_due_to_gravity = Quantity(\"acceleration_due_to_gravity\", abbrev=\"g\")\n+gee = gees = acceleration_due_to_gravity = PhysicalConstant(\"acceleration_due_to_gravity\", abbrev=\"g\")\n \n # magnetic constant:\n-u0 = magnetic_constant = vacuum_permeability = Quantity(\"magnetic_constant\")\n+u0 = magnetic_constant = vacuum_permeability = PhysicalConstant(\"magnetic_constant\")\n \n # electric constat:\n-e0 = electric_constant = vacuum_permittivity = Quantity(\"vacuum_permittivity\")\n+e0 = electric_constant = vacuum_permittivity = PhysicalConstant(\"vacuum_permittivity\")\n \n # vacuum impedance:\n-Z0 = vacuum_impedance = Quantity(\"vacuum_impedance\", abbrev='Z_0', latex_repr=r'Z_{0}')\n+Z0 = vacuum_impedance = PhysicalConstant(\"vacuum_impedance\", abbrev='Z_0', latex_repr=r'Z_{0}')\n \n # Coulomb's constant:\n coulomb_constant = coulombs_constant = electric_force_constant = \\\n- Quantity(\"coulomb_constant\", abbrev=\"k_e\")\n+ PhysicalConstant(\"coulomb_constant\", abbrev=\"k_e\")\n \n \n atmosphere = atmospheres = atm = Quantity(\"atmosphere\", abbrev=\"atm\")\ndiff --git a/sympy/physics/units/prefixes.py b/sympy/physics/units/prefixes.py\nindex 10e1bbcbf52d..0fe8fbe7dac3 100644\n--- a/sympy/physics/units/prefixes.py\n+++ b/sympy/physics/units/prefixes.py\n@@ -137,7 +137,8 @@ def prefix_unit(unit, prefixes):\n for prefix_abbr, prefix in prefixes.items():\n quantity = Quantity(\n \"%s%s\" % (prefix.name, unit.name),\n- abbrev=(\"%s%s\" % (prefix.abbrev, unit.abbrev))\n+ abbrev=(\"%s%s\" % (prefix.abbrev, unit.abbrev)),\n+ is_prefixed=True,\n )\n UnitSystem._quantity_dimensional_equivalence_map_global[quantity] = unit\n UnitSystem._quantity_scale_factors_global[quantity] = (prefix.scale_factor, unit)\ndiff --git a/sympy/physics/units/quantities.py b/sympy/physics/units/quantities.py\nindex 4edc9acdcd1b..4f18d3656a11 100644\n--- a/sympy/physics/units/quantities.py\n+++ b/sympy/physics/units/quantities.py\n@@ -21,11 +21,13 @@ class Quantity(AtomicExpr):\n is_real = True\n is_number = False\n is_nonzero = True\n+ is_physical_constant = False\n _diff_wrt = True\n \n def __new__(cls, name, abbrev=None, dimension=None, 
scale_factor=None,\n latex_repr=None, pretty_unicode_repr=None,\n pretty_ascii_repr=None, mathml_presentation_repr=None,\n+ is_prefixed=False,\n **assumptions):\n \n if not isinstance(name, Symbol):\n@@ -63,6 +65,9 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None,\n elif isinstance(abbrev, str):\n abbrev = Symbol(abbrev)\n \n+ # HACK: These are here purely for type checking. They actually get assigned below.\n+ cls._is_prefixed = is_prefixed\n+\n obj = AtomicExpr.__new__(cls, name, abbrev)\n obj._name = name\n obj._abbrev = abbrev\n@@ -70,6 +75,7 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None,\n obj._unicode_repr = pretty_unicode_repr\n obj._ascii_repr = pretty_ascii_repr\n obj._mathml_repr = mathml_presentation_repr\n+ obj._is_prefixed = is_prefixed\n \n if dimension is not None:\n # TODO: remove after deprecation:\n@@ -80,6 +86,7 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None,\n # TODO: remove after deprecation:\n with ignore_warnings(SymPyDeprecationWarning):\n obj.set_scale_factor(scale_factor)\n+\n return obj\n \n def set_dimension(self, dimension, unit_system=\"SI\"):\n@@ -119,6 +126,8 @@ def set_global_relative_scale_factor(self, scale_factor, reference_quantity):\n \"\"\"\n from sympy.physics.units import UnitSystem\n scale_factor = sympify(scale_factor)\n+ if isinstance(scale_factor, Prefix):\n+ self._is_prefixed = True\n # replace all prefixes by their ratio to canonical units:\n scale_factor = scale_factor.replace(\n lambda x: isinstance(x, Prefix),\n@@ -232,3 +241,13 @@ def convert_to(self, other, unit_system=\"SI\"):\n def free_symbols(self):\n \"\"\"Return free symbols from quantity.\"\"\"\n return set()\n+\n+ @property\n+ def is_prefixed(self):\n+ \"\"\"Whether or not the quantity is prefixed. Eg. `kilogram` is prefixed, but `gram` is not.\"\"\"\n+ return self._is_prefixed\n+\n+class PhysicalConstant(Quantity):\n+ \"\"\"Represents a physical constant, eg. 
`speed_of_light` or `avogadro_constant`.\"\"\"\n+\n+ is_physical_constant = True\ndiff --git a/sympy/physics/units/systems/mks.py b/sympy/physics/units/systems/mks.py\nindex 116ab36f8555..4295254e8122 100644\n--- a/sympy/physics/units/systems/mks.py\n+++ b/sympy/physics/units/systems/mks.py\n@@ -4,8 +4,8 @@\n MKS stands for \"meter, kilogram, second\".\n \"\"\"\n \n-from sympy.physics.units import UnitSystem, DimensionSystem\n-from sympy.physics.units.definitions import G, Hz, J, N, Pa, W, c, g, kg, m, s\n+from sympy.physics.units import UnitSystem\n+from sympy.physics.units.definitions import gravitational_constant, hertz, joule, newton, pascal, watt, speed_of_light, gram, kilogram, meter, second\n from sympy.physics.units.definitions.dimension_definitions import (\n acceleration, action, energy, force, frequency, momentum,\n power, pressure, velocity, length, mass, time)\n@@ -15,25 +15,32 @@\n dims = (velocity, acceleration, momentum, force, energy, power, pressure,\n frequency, action)\n \n-units = [m, g, s, J, N, W, Pa, Hz]\n+units = [meter, gram, second, joule, newton, watt, pascal, hertz]\n all_units = []\n \n-# Prefixes of units like g, J, N etc get added using `prefix_unit`\n+# Prefixes of units like gram, joule, newton etc get added using `prefix_unit`\n # in the for loop, but the actual units have to be added manually.\n-all_units.extend([g, J, N, W, Pa, Hz])\n+all_units.extend([gram, joule, newton, watt, pascal, hertz])\n \n for u in units:\n all_units.extend(prefix_unit(u, PREFIXES))\n-all_units.extend([G, c])\n+all_units.extend([gravitational_constant, speed_of_light])\n \n # unit system\n-MKS = UnitSystem(base_units=(m, kg, s), units=all_units, name=\"MKS\", dimension_system=dimsys_length_weight_time)\n+MKS = UnitSystem(base_units=(meter, kilogram, second), units=all_units, name=\"MKS\", dimension_system=dimsys_length_weight_time, derived_units={\n+ power.name: watt,\n+ time.name: second,\n+ pressure.name: pascal,\n+ length.name: meter,\n+ frequency.name: hertz,\n+ mass.name: kilogram,\n+ force.name: newton,\n+ energy.name: joule,\n+ velocity.name: meter/second,\n+ acceleration.name: meter/(second**2),\n+})\n \n \n __all__ = [\n- 'force', 'DimensionSystem', 'energy', 'Pa', 'MKS',\n- 'dimsys_length_weight_time', 'Hz', 'power', 's', 'UnitSystem', 'units',\n- 'mass', 'momentum', 'acceleration', 'G', 'J', 'N', 'pressure', 'W',\n- 'all_units', 'c', 'kg', 'g', 'dims', 'prefix_unit', 'm', 'PREFIXES',\n- 'length', 'frequency', 'u', 'time', 'action', 'velocity',\n+ 'MKS', 'units', 'all_units', 'dims',\n ]\ndiff --git a/sympy/physics/units/systems/mksa.py b/sympy/physics/units/systems/mksa.py\nindex cb68e59af232..76df5a80f79d 100644\n--- a/sympy/physics/units/systems/mksa.py\n+++ b/sympy/physics/units/systems/mksa.py\n@@ -6,7 +6,7 @@\n \n from typing import List\n \n-from sympy.physics.units.definitions import Z0, A, C, F, H, S, T, V, Wb, ohm\n+from sympy.physics.units.definitions import Z0, ampere, coulomb, farad, henry, siemens, tesla, volt, weber, ohm\n from sympy.physics.units.definitions.dimension_definitions import (\n capacitance, charge, conductance, current, impedance, inductance,\n magnetic_density, magnetic_flux, voltage)\n@@ -17,11 +17,12 @@\n dims = (voltage, impedance, conductance, current, capacitance, inductance, charge,\n magnetic_density, magnetic_flux)\n \n-units = [A, V, ohm, S, F, H, C, T, Wb]\n+units = [ampere, volt, ohm, siemens, farad, henry, coulomb, tesla, weber]\n \n all_units = [] # type: List[Quantity]\n for u in units:\n all_units.extend(prefix_unit(u, 
PREFIXES))\n+all_units.extend(units)\n \n all_units.append(Z0)\n \n@@ -40,4 +41,14 @@\n magnetic_flux=dict(length=2, mass=1, current=-1, time=-2),\n ))\n \n-MKSA = MKS.extend(base=(A,), units=all_units, name='MKSA', dimension_system=dimsys_MKSA)\n+MKSA = MKS.extend(base=(ampere,), units=all_units, name='MKSA', dimension_system=dimsys_MKSA, derived_units={\n+ magnetic_flux.name: weber,\n+ impedance.name: ohm,\n+ current.name: ampere,\n+ voltage.name: volt,\n+ inductance.name: henry,\n+ conductance.name: siemens,\n+ magnetic_density.name: tesla,\n+ charge.name: coulomb,\n+ capacitance.name: farad,\n+})\ndiff --git a/sympy/physics/units/systems/si.py b/sympy/physics/units/systems/si.py\nindex d483d56d790b..1e77b471443b 100644\n--- a/sympy/physics/units/systems/si.py\n+++ b/sympy/physics/units/systems/si.py\n@@ -56,6 +56,7 @@\n for u in units:\n all_units.extend(prefix_unit(u, PREFIXES))\n \n+all_units.extend(units)\n all_units.extend([mol, cd, K, lux])\n \n \n@@ -71,7 +72,29 @@\n [information],\n )\n \n-SI = MKSA.extend(base=(mol, cd, K), units=all_units, name='SI', dimension_system=dimsys_SI)\n+SI = MKSA.extend(base=(mol, cd, K), units=all_units, name='SI', dimension_system=dimsys_SI, derived_units={\n+ power.name: watt,\n+ magnetic_flux.name: weber,\n+ time.name: second,\n+ impedance.name: ohm,\n+ pressure.name: pascal,\n+ current.name: ampere,\n+ voltage.name: volt,\n+ length.name: meter,\n+ frequency.name: hertz,\n+ inductance.name: henry,\n+ temperature.name: kelvin,\n+ amount_of_substance.name: mole,\n+ luminous_intensity.name: candela,\n+ conductance.name: siemens,\n+ mass.name: kilogram,\n+ magnetic_density.name: tesla,\n+ charge.name: coulomb,\n+ force.name: newton,\n+ capacitance.name: farad,\n+ energy.name: joule,\n+ velocity.name: meter/second,\n+})\n \n One = S.One\n \ndiff --git a/sympy/physics/units/tests/test_quantities.py b/sympy/physics/units/tests/test_quantities.py\nindex c9e5900d3d48..998b630eb7aa 100644\n--- a/sympy/physics/units/tests/test_quantities.py\n+++ b/sympy/physics/units/tests/test_quantities.py\n@@ -11,7 +11,9 @@\n from sympy.functions.elementary.trigonometric import sin\n from sympy.integrals.integrals import integrate\n from sympy.physics.units import (amount_of_substance, area, convert_to, find_unit,\n- volume, kilometer, joule)\n+ volume, kilometer, joule, molar_gas_constant,\n+ vacuum_permittivity, elementary_charge, volt,\n+ ohm)\n from sympy.physics.units.definitions import (amu, au, centimeter, coulomb,\n day, foot, grams, hour, inch, kg, km, m, meter, millimeter,\n minute, quart, s, second, speed_of_light, bit,\n@@ -23,7 +25,7 @@\n energy\n )\n from sympy.physics.units.prefixes import PREFIXES, kilo\n-from sympy.physics.units.quantities import Quantity\n+from sympy.physics.units.quantities import PhysicalConstant, Quantity\n from sympy.physics.units.systems import SI\n from sympy.testing.pytest import XFAIL, raises, warns_deprecated_sympy\n \n@@ -524,3 +526,34 @@ def test_issue_22819():\n assert tonne.convert_to(gram) == 1000000*gram\n assert dimsys_SI.get_dimensional_dependencies(area) == {'length': 2}\n assert Da.scale_factor == 1.66053906660000e-24\n+\n+\n+def test_prefixed_property():\n+ assert not meter.is_prefixed\n+ assert not joule.is_prefixed\n+ assert not day.is_prefixed\n+ assert not second.is_prefixed\n+ assert not volt.is_prefixed\n+ assert not ohm.is_prefixed\n+ assert centimeter.is_prefixed\n+ assert kilometer.is_prefixed\n+ assert kilogram.is_prefixed\n+ assert pebibyte.is_prefixed\n+\n+def test_physics_constant():\n+ from 
sympy.physics.units import definitions\n+\n+ for name in dir(definitions):\n+ quantity = getattr(definitions, name)\n+ if not isinstance(quantity, Quantity):\n+ continue\n+ if name.endswith('_constant'):\n+ assert isinstance(quantity, PhysicalConstant), f\"{quantity} must be PhysicalConstant, but is {type(quantity)}\"\n+ assert quantity.is_physical_constant, f\"{name} is not marked as physics constant when it should be\"\n+\n+ for const in [gravitational_constant, molar_gas_constant, vacuum_permittivity, speed_of_light, elementary_charge]:\n+ assert isinstance(const, PhysicalConstant), f\"{const} must be PhysicalConstant, but is {type(const)}\"\n+ assert const.is_physical_constant, f\"{const} is not marked as physics constant when it should be\"\n+\n+ assert not meter.is_physical_constant\n+ assert not joule.is_physical_constant\ndiff --git a/sympy/physics/units/tests/test_unitsystem.py b/sympy/physics/units/tests/test_unitsystem.py\nindex 5c0729cfdd47..a04f3aabb627 100644\n--- a/sympy/physics/units/tests/test_unitsystem.py\n+++ b/sympy/physics/units/tests/test_unitsystem.py\n@@ -63,3 +63,24 @@ def test_is_consistent():\n dimension_system = DimensionSystem([length, time])\n us = UnitSystem([m, s], dimension_system=dimension_system)\n assert us.is_consistent == True\n+\n+\n+def test_get_units_non_prefixed():\n+ from sympy.physics.units import volt, ohm\n+ unit_system = UnitSystem.get_unit_system(\"SI\")\n+ units = unit_system.get_units_non_prefixed()\n+ for prefix in [\"giga\", \"tera\", \"peta\", \"exa\", \"zetta\", \"yotta\", \"kilo\", \"hecto\", \"deca\", \"deci\", \"centi\", \"milli\", \"micro\", \"nano\", \"pico\", \"femto\", \"atto\", \"zepto\", \"yocto\"]:\n+ for unit in units:\n+ assert isinstance(unit, Quantity), f\"{unit} must be a Quantity, not {type(unit)}\"\n+ assert not unit.is_prefixed, f\"{unit} is marked as prefixed\"\n+ assert not unit.is_physical_constant, f\"{unit} is marked as physics constant\"\n+ assert not unit.name.name.startswith(prefix), f\"Unit {unit.name} has prefix {prefix}\"\n+ assert volt in units\n+ assert ohm in units\n+\n+def test_derived_units_must_exist_in_unit_system():\n+ for unit_system in UnitSystem._unit_systems.values():\n+ for preferred_unit in unit_system.derived_units.values():\n+ units = preferred_unit.atoms(Quantity)\n+ for unit in units:\n+ assert unit in unit_system._units, f\"Unit {unit} is not in unit system {unit_system}\"\ndiff --git a/sympy/physics/units/tests/test_util.py b/sympy/physics/units/tests/test_util.py\nindex 4e0bcfb83ef6..d5f464652cb8 100644\n--- a/sympy/physics/units/tests/test_util.py\n+++ b/sympy/physics/units/tests/test_util.py\n@@ -120,6 +120,30 @@ def test_quantity_simplify():\n assert quantity_simplify(2**(foot/inch*kilo/1000)*inch) == 4096*foot/12\n assert quantity_simplify(foot**2*inch + inch**2*foot) == 13*foot**3/144\n \n+def test_quantity_simplify_across_dimensions():\n+ from sympy.physics.units.util import quantity_simplify\n+ from sympy.physics.units import ampere, ohm, volt, joule, pascal, farad, second, watt, siemens, henry, tesla, weber, hour, newton\n+\n+ assert quantity_simplify(ampere*ohm, across_dimensions=True) == volt\n+ assert quantity_simplify(6*ampere*ohm, across_dimensions=True) == 6*volt\n+ assert quantity_simplify(volt/ampere, across_dimensions=True) == ohm\n+ assert quantity_simplify(volt/ohm, across_dimensions=True) == ampere\n+ assert quantity_simplify(joule/meter**3, across_dimensions=True) == pascal\n+ assert quantity_simplify(farad*ohm, across_dimensions=True) == second\n+ assert 
quantity_simplify(joule/second, across_dimensions=True) == watt\n+ assert quantity_simplify(meter**3/second, across_dimensions=True) == meter**3/second\n+ assert quantity_simplify(joule/second, across_dimensions=True) == watt\n+\n+ assert quantity_simplify(joule/coulomb, across_dimensions=True) == volt\n+ assert quantity_simplify(volt/ampere, across_dimensions=True) == ohm\n+ assert quantity_simplify(ampere/volt, across_dimensions=True) == siemens\n+ assert quantity_simplify(coulomb/volt, across_dimensions=True) == farad\n+ assert quantity_simplify(volt*second/ampere, across_dimensions=True) == henry\n+ assert quantity_simplify(volt*second/meter**2, across_dimensions=True) == tesla\n+ assert quantity_simplify(joule/ampere, across_dimensions=True) == weber\n+\n+ assert quantity_simplify(5*kilometer/hour, across_dimensions=True) == 25*meter/(18*second)\n+ assert quantity_simplify(5*kilogram*meter/second**2, across_dimensions=True) == 5*newton\n \n def test_check_dimensions():\n x = symbols('x')\ndiff --git a/sympy/physics/units/unitsystem.py b/sympy/physics/units/unitsystem.py\nindex dd402260da90..eaf889bfdb0a 100644\n--- a/sympy/physics/units/unitsystem.py\n+++ b/sympy/physics/units/unitsystem.py\n@@ -2,14 +2,16 @@\n Unit system for physical quantities; include definition of constants.\n \"\"\"\n \n-from typing import Dict as tDict\n+from typing import Dict as tDict, Set as tSet\n \n from sympy.core.add import Add\n from sympy.core.function import (Derivative, Function)\n from sympy.core.mul import Mul\n from sympy.core.power import Pow\n from sympy.core.singleton import S\n+from sympy.core.symbol import Symbol\n from sympy.physics.units.dimensions import _QuantityMapper\n+from sympy.physics.units.quantities import Quantity\n \n from .dimensions import Dimension\n \n@@ -26,7 +28,7 @@ class UnitSystem(_QuantityMapper):\n \n _unit_systems = {} # type: tDict[str, UnitSystem]\n \n- def __init__(self, base_units, units=(), name=\"\", descr=\"\", dimension_system=None):\n+ def __init__(self, base_units, units=(), name=\"\", descr=\"\", dimension_system=None, derived_units: tDict[Symbol, Quantity]={}):\n \n UnitSystem._unit_systems[name] = self\n \n@@ -37,6 +39,7 @@ def __init__(self, base_units, units=(), name=\"\", descr=\"\", dimension_system=Non\n self._dimension_system = dimension_system\n self._units = tuple(set(base_units) | set(units))\n self._base_units = tuple(base_units)\n+ self._derived_units = derived_units\n \n super().__init__()\n \n@@ -57,7 +60,7 @@ def __str__(self):\n def __repr__(self):\n return '<UnitSystem: %s>' % repr(self._base_units)\n \n- def extend(self, base, units=(), name=\"\", description=\"\", dimension_system=None):\n+ def extend(self, base, units=(), name=\"\", description=\"\", dimension_system=None, derived_units: tDict[Symbol, Quantity]={}):\n \"\"\"Extend the current system into a new one.\n \n Take the base and normal units of the current system to merge\n@@ -68,7 +71,7 @@ def extend(self, base, units=(), name=\"\", description=\"\", dimension_system=None)\n base = self._base_units + tuple(base)\n units = self._units + tuple(units)\n \n- return UnitSystem(base, units, name, description, dimension_system)\n+ return UnitSystem(base, units, name, description, dimension_system, {**self._derived_units, **derived_units})\n \n def get_dimension_system(self):\n return self._dimension_system\n@@ -121,6 +124,10 @@ def is_consistent(self):\n # test is performed in DimensionSystem\n return self.get_dimension_system().is_consistent\n \n+ @property\n+ def 
derived_units(self) -> tDict[Symbol, Quantity]:\n+ return self._derived_units\n+\n def get_dimensional_expr(self, expr):\n from sympy.physics.units import Quantity\n if isinstance(expr, Mul):\n@@ -192,3 +199,9 @@ def _collect_factor_and_dimension(self, expr):\n return S.One, expr\n else:\n return expr, Dimension(1)\n+\n+ def get_units_non_prefixed(self) -> tSet[Quantity]:\n+ \"\"\"\n+ Return the units of the system that do not have a prefix.\n+ \"\"\"\n+ return set(filter(lambda u: not u.is_prefixed and not u.is_physical_constant, self._units))\ndiff --git a/sympy/physics/units/util.py b/sympy/physics/units/util.py\nindex 13125ac1ddc0..54a30039f923 100644\n--- a/sympy/physics/units/util.py\n+++ b/sympy/physics/units/util.py\n@@ -11,9 +11,10 @@\n from sympy.core.sorting import ordered\n from sympy.core.sympify import sympify\n from sympy.matrices.common import NonInvertibleMatrixError\n-from sympy.physics.units.dimensions import Dimension\n+from sympy.physics.units.dimensions import Dimension, DimensionSystem\n from sympy.physics.units.prefixes import Prefix\n from sympy.physics.units.quantities import Quantity\n+from sympy.physics.units.unitsystem import UnitSystem\n from sympy.utilities.iterables import sift\n \n \n@@ -120,21 +121,24 @@ def get_total_scale_factor(expr):\n return expr_scale_factor * Mul.fromiter((1/get_total_scale_factor(u) * u) ** p for u, p in zip(target_units, depmat))\n \n \n-def quantity_simplify(expr):\n+def quantity_simplify(expr, across_dimensions: bool=False, unit_system=\"SI\"):\n \"\"\"Return an equivalent expression in which prefixes are replaced\n with numerical values and all units of a given dimension are the\n- unified in a canonical manner.\n+ unified in a canonical manner by default. `across_dimensions` allows\n+ for units of different dimensions to be simplified together.\n \n Examples\n ========\n \n >>> from sympy.physics.units.util import quantity_simplify\n >>> from sympy.physics.units.prefixes import kilo\n- >>> from sympy.physics.units import foot, inch\n+ >>> from sympy.physics.units import foot, inch, joule, coulomb\n >>> quantity_simplify(kilo*foot*inch)\n 250*foot**2/3\n >>> quantity_simplify(foot - 6*inch)\n foot/2\n+ >>> quantity_simplify(5*joule/coulomb, across_dimensions=True)\n+ 5*volt\n \"\"\"\n \n if expr.is_Atom or not expr.has(Prefix, Quantity):\n@@ -154,6 +158,29 @@ def quantity_simplify(expr):\n ref = v[0]/v[0].scale_factor\n expr = expr.xreplace({vi: ref*vi.scale_factor for vi in v[1:]})\n \n+ if across_dimensions:\n+ # combine quantities of different dimensions into a single\n+ # quantity that is equivalent to the original expression\n+\n+ unit_system = UnitSystem.get_unit_system(unit_system)\n+ dimension_system: DimensionSystem = unit_system.get_dimension_system()\n+ dim_expr = unit_system.get_dimensional_expr(expr)\n+ dim_deps = dimension_system.get_dimensional_dependencies(dim_expr, mark_dimensionless=True)\n+\n+ target_dimension = None\n+ for result_dim, result_deps in dimension_system.dimensional_dependencies.items():\n+ if result_deps == dim_deps:\n+ target_dimension = result_dim\n+ break\n+\n+ if target_dimension is None:\n+ # if we can't find a target dimension, we can't do anything. unsure how to handle this case.\n+ return expr\n+\n+ target_unit = unit_system.derived_units.get(target_dimension)\n+ if target_unit:\n+ expr = convert_to(expr, target_unit, unit_system)\n+\n return expr\n \n \n" }
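The reviewed commit above also adds a helper for listing a unit system's plain (non-prefixed, non-constant) units; a quick sketch based on the new `test_get_units_non_prefixed` test:

```python
from sympy.physics.units import UnitSystem, volt, ohm

si = UnitSystem.get_unit_system("SI")
units = si.get_units_non_prefixed()   # set of Quantity objects: volt, ohm, meter, ...
print(volt in units, ohm in units)    # True True
```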
[ { "diff_hunk": "@@ -26,7 +28,7 @@ class UnitSystem(_QuantityMapper):\n \n _unit_systems = {} # type: tDict[str, UnitSystem]\n \n- def __init__(self, base_units, units=(), name=\"\", descr=\"\", dimension_system=None):\n+ def __init__(self, base_units, units=(), name=\"\", descr=\"\", dimension_system=None, derived_units: tDict[Symbol, Quantity]={}):", "line": null, "original_line": 31, "original_start_line": null, "path": "sympy/physics/units/unitsystem.py", "start_line": null, "text": "@user1:\nWhat about mapping `tDict[Dimension, Quantity]` instead of the name of the dimension?\n\n@author:\nThat might work, I'm not sure if `Dimension` can be used as a dictionary key.\n\n@author:\nThis works, but unfortunately requires an extra dict lookup. I'll add a new commit\n\n@user1:\n> That might work, I'm not sure if `Dimension` can be used as a dictionary key.\r\n\r\nThere was a bug some time ago not allowing `Dimension` to have a hash (the object was mutable), so the code was developed in order not to use `Dimension` objects as dictionary keys. I think this has been since fixed." } ]
d4899d4d352df65c6d689f4c3f85567a02a46262
diff --git a/.mailmap b/.mailmap index 18f20831db4a..ee94aa49dc36 100644 --- a/.mailmap +++ b/.mailmap @@ -271,6 +271,7 @@ Calvin Jay Ross <[email protected]> Cameron King <[email protected]> Carl Sandrock <[email protected]> Carlos Cordoba <[email protected]> +Carson McManus <[email protected]> Carson McManus <[email protected]> Carsten Knoll <[email protected]> Case Van Horsen <[email protected]> Cavendish McKay <[email protected]> diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py index b20e408567be..171312eba78e 100644 --- a/sympy/core/tests/test_args.py +++ b/sympy/core/tests/test_args.py @@ -4251,6 +4251,11 @@ def test_sympy__physics__units__quantities__Quantity(): assert _test_args(Quantity("dam")) +def test_sympy__physics__units__quantities__PhysicalConstant(): + from sympy.physics.units.quantities import PhysicalConstant + assert _test_args(PhysicalConstant("foo")) + + def test_sympy__physics__units__prefixes__Prefix(): from sympy.physics.units.prefixes import Prefix assert _test_args(Prefix('kilo', 'k', 3)) diff --git a/sympy/physics/units/definitions/unit_definitions.py b/sympy/physics/units/definitions/unit_definitions.py index b55569fab87a..4939fa65acb5 100644 --- a/sympy/physics/units/definitions/unit_definitions.py +++ b/sympy/physics/units/definitions/unit_definitions.py @@ -5,7 +5,7 @@ from sympy.core.numbers import (Rational, pi) from sympy.core.singleton import S as S_singleton from sympy.physics.units.prefixes import kilo, mega, milli, micro, deci, centi, nano, pico, kibi, mebi, gibi, tebi, pebi, exbi -from sympy.physics.units.quantities import Quantity +from sympy.physics.units.quantities import PhysicalConstant, Quantity One = S_singleton.One @@ -117,7 +117,7 @@ ug.set_global_relative_scale_factor(micro, gram) # Atomic mass constant -Da = dalton = amu = amus = atomic_mass_unit = atomic_mass_constant = Quantity("atomic_mass_constant") +Da = dalton = amu = amus = atomic_mass_unit = atomic_mass_constant = PhysicalConstant("atomic_mass_constant") t = metric_ton = tonne = Quantity("tonne", abbrev="t") tonne.set_global_relative_scale_factor(mega, gram) @@ -232,62 +232,62 @@ #### CONSTANTS #### # Newton constant -G = gravitational_constant = Quantity("gravitational_constant", abbrev="G") +G = gravitational_constant = PhysicalConstant("gravitational_constant", abbrev="G") # speed of light -c = speed_of_light = Quantity("speed_of_light", abbrev="c") +c = speed_of_light = PhysicalConstant("speed_of_light", abbrev="c") # elementary charge -elementary_charge = Quantity("elementary_charge", abbrev="e") +elementary_charge = PhysicalConstant("elementary_charge", abbrev="e") # Planck constant -planck = Quantity("planck", abbrev="h") +planck = PhysicalConstant("planck", abbrev="h") # Reduced Planck constant -hbar = Quantity("hbar", abbrev="hbar") +hbar = PhysicalConstant("hbar", abbrev="hbar") # Electronvolt -eV = electronvolt = electronvolts = Quantity("electronvolt", abbrev="eV") +eV = electronvolt = electronvolts = PhysicalConstant("electronvolt", abbrev="eV") # Avogadro number -avogadro_number = Quantity("avogadro_number") +avogadro_number = PhysicalConstant("avogadro_number") # Avogadro constant -avogadro = avogadro_constant = Quantity("avogadro_constant") +avogadro = avogadro_constant = PhysicalConstant("avogadro_constant") # Boltzmann constant -boltzmann = boltzmann_constant = Quantity("boltzmann_constant") +boltzmann = boltzmann_constant = PhysicalConstant("boltzmann_constant") # Stefan-Boltzmann constant -stefan = stefan_boltzmann_constant = 
Quantity("stefan_boltzmann_constant") +stefan = stefan_boltzmann_constant = PhysicalConstant("stefan_boltzmann_constant") # Molar gas constant -R = molar_gas_constant = Quantity("molar_gas_constant", abbrev="R") +R = molar_gas_constant = PhysicalConstant("molar_gas_constant", abbrev="R") # Faraday constant -faraday_constant = Quantity("faraday_constant") +faraday_constant = PhysicalConstant("faraday_constant") # Josephson constant -josephson_constant = Quantity("josephson_constant", abbrev="K_j") +josephson_constant = PhysicalConstant("josephson_constant", abbrev="K_j") # Von Klitzing constant -von_klitzing_constant = Quantity("von_klitzing_constant", abbrev="R_k") +von_klitzing_constant = PhysicalConstant("von_klitzing_constant", abbrev="R_k") # Acceleration due to gravity (on the Earth surface) -gee = gees = acceleration_due_to_gravity = Quantity("acceleration_due_to_gravity", abbrev="g") +gee = gees = acceleration_due_to_gravity = PhysicalConstant("acceleration_due_to_gravity", abbrev="g") # magnetic constant: -u0 = magnetic_constant = vacuum_permeability = Quantity("magnetic_constant") +u0 = magnetic_constant = vacuum_permeability = PhysicalConstant("magnetic_constant") # electric constat: -e0 = electric_constant = vacuum_permittivity = Quantity("vacuum_permittivity") +e0 = electric_constant = vacuum_permittivity = PhysicalConstant("vacuum_permittivity") # vacuum impedance: -Z0 = vacuum_impedance = Quantity("vacuum_impedance", abbrev='Z_0', latex_repr=r'Z_{0}') +Z0 = vacuum_impedance = PhysicalConstant("vacuum_impedance", abbrev='Z_0', latex_repr=r'Z_{0}') # Coulomb's constant: coulomb_constant = coulombs_constant = electric_force_constant = \ - Quantity("coulomb_constant", abbrev="k_e") + PhysicalConstant("coulomb_constant", abbrev="k_e") atmosphere = atmospheres = atm = Quantity("atmosphere", abbrev="atm") diff --git a/sympy/physics/units/prefixes.py b/sympy/physics/units/prefixes.py index 10e1bbcbf52d..0fe8fbe7dac3 100644 --- a/sympy/physics/units/prefixes.py +++ b/sympy/physics/units/prefixes.py @@ -137,7 +137,8 @@ def prefix_unit(unit, prefixes): for prefix_abbr, prefix in prefixes.items(): quantity = Quantity( "%s%s" % (prefix.name, unit.name), - abbrev=("%s%s" % (prefix.abbrev, unit.abbrev)) + abbrev=("%s%s" % (prefix.abbrev, unit.abbrev)), + is_prefixed=True, ) UnitSystem._quantity_dimensional_equivalence_map_global[quantity] = unit UnitSystem._quantity_scale_factors_global[quantity] = (prefix.scale_factor, unit) diff --git a/sympy/physics/units/quantities.py b/sympy/physics/units/quantities.py index 4edc9acdcd1b..4f18d3656a11 100644 --- a/sympy/physics/units/quantities.py +++ b/sympy/physics/units/quantities.py @@ -21,11 +21,13 @@ class Quantity(AtomicExpr): is_real = True is_number = False is_nonzero = True + is_physical_constant = False _diff_wrt = True def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None, latex_repr=None, pretty_unicode_repr=None, pretty_ascii_repr=None, mathml_presentation_repr=None, + is_prefixed=False, **assumptions): if not isinstance(name, Symbol): @@ -63,6 +65,9 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None, elif isinstance(abbrev, str): abbrev = Symbol(abbrev) + # HACK: These are here purely for type checking. They actually get assigned below. 
+ cls._is_prefixed = is_prefixed + obj = AtomicExpr.__new__(cls, name, abbrev) obj._name = name obj._abbrev = abbrev @@ -70,6 +75,7 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None, obj._unicode_repr = pretty_unicode_repr obj._ascii_repr = pretty_ascii_repr obj._mathml_repr = mathml_presentation_repr + obj._is_prefixed = is_prefixed if dimension is not None: # TODO: remove after deprecation: @@ -80,6 +86,7 @@ def __new__(cls, name, abbrev=None, dimension=None, scale_factor=None, # TODO: remove after deprecation: with ignore_warnings(SymPyDeprecationWarning): obj.set_scale_factor(scale_factor) + return obj def set_dimension(self, dimension, unit_system="SI"): @@ -119,6 +126,8 @@ def set_global_relative_scale_factor(self, scale_factor, reference_quantity): """ from sympy.physics.units import UnitSystem scale_factor = sympify(scale_factor) + if isinstance(scale_factor, Prefix): + self._is_prefixed = True # replace all prefixes by their ratio to canonical units: scale_factor = scale_factor.replace( lambda x: isinstance(x, Prefix), @@ -232,3 +241,13 @@ def convert_to(self, other, unit_system="SI"): def free_symbols(self): """Return free symbols from quantity.""" return set() + + @property + def is_prefixed(self): + """Whether or not the quantity is prefixed. Eg. `kilogram` is prefixed, but `gram` is not.""" + return self._is_prefixed + +class PhysicalConstant(Quantity): + """Represents a physical constant, eg. `speed_of_light` or `avogadro_constant`.""" + + is_physical_constant = True diff --git a/sympy/physics/units/systems/mks.py b/sympy/physics/units/systems/mks.py index 116ab36f8555..18cc4b1be5e2 100644 --- a/sympy/physics/units/systems/mks.py +++ b/sympy/physics/units/systems/mks.py @@ -4,8 +4,8 @@ MKS stands for "meter, kilogram, second". """ -from sympy.physics.units import UnitSystem, DimensionSystem -from sympy.physics.units.definitions import G, Hz, J, N, Pa, W, c, g, kg, m, s +from sympy.physics.units import UnitSystem +from sympy.physics.units.definitions import gravitational_constant, hertz, joule, newton, pascal, watt, speed_of_light, gram, kilogram, meter, second from sympy.physics.units.definitions.dimension_definitions import ( acceleration, action, energy, force, frequency, momentum, power, pressure, velocity, length, mass, time) @@ -15,25 +15,32 @@ dims = (velocity, acceleration, momentum, force, energy, power, pressure, frequency, action) -units = [m, g, s, J, N, W, Pa, Hz] +units = [meter, gram, second, joule, newton, watt, pascal, hertz] all_units = [] -# Prefixes of units like g, J, N etc get added using `prefix_unit` +# Prefixes of units like gram, joule, newton etc get added using `prefix_unit` # in the for loop, but the actual units have to be added manually. 
-all_units.extend([g, J, N, W, Pa, Hz]) +all_units.extend([gram, joule, newton, watt, pascal, hertz]) for u in units: all_units.extend(prefix_unit(u, PREFIXES)) -all_units.extend([G, c]) +all_units.extend([gravitational_constant, speed_of_light]) # unit system -MKS = UnitSystem(base_units=(m, kg, s), units=all_units, name="MKS", dimension_system=dimsys_length_weight_time) +MKS = UnitSystem(base_units=(meter, kilogram, second), units=all_units, name="MKS", dimension_system=dimsys_length_weight_time, derived_units={ + power: watt, + time: second, + pressure: pascal, + length: meter, + frequency: hertz, + mass: kilogram, + force: newton, + energy: joule, + velocity: meter/second, + acceleration: meter/(second**2), +}) __all__ = [ - 'force', 'DimensionSystem', 'energy', 'Pa', 'MKS', - 'dimsys_length_weight_time', 'Hz', 'power', 's', 'UnitSystem', 'units', - 'mass', 'momentum', 'acceleration', 'G', 'J', 'N', 'pressure', 'W', - 'all_units', 'c', 'kg', 'g', 'dims', 'prefix_unit', 'm', 'PREFIXES', - 'length', 'frequency', 'u', 'time', 'action', 'velocity', + 'MKS', 'units', 'all_units', 'dims', ] diff --git a/sympy/physics/units/systems/mksa.py b/sympy/physics/units/systems/mksa.py index cb68e59af232..1bbb149bdefa 100644 --- a/sympy/physics/units/systems/mksa.py +++ b/sympy/physics/units/systems/mksa.py @@ -6,7 +6,7 @@ from typing import List -from sympy.physics.units.definitions import Z0, A, C, F, H, S, T, V, Wb, ohm +from sympy.physics.units.definitions import Z0, ampere, coulomb, farad, henry, siemens, tesla, volt, weber, ohm from sympy.physics.units.definitions.dimension_definitions import ( capacitance, charge, conductance, current, impedance, inductance, magnetic_density, magnetic_flux, voltage) @@ -17,11 +17,12 @@ dims = (voltage, impedance, conductance, current, capacitance, inductance, charge, magnetic_density, magnetic_flux) -units = [A, V, ohm, S, F, H, C, T, Wb] +units = [ampere, volt, ohm, siemens, farad, henry, coulomb, tesla, weber] all_units = [] # type: List[Quantity] for u in units: all_units.extend(prefix_unit(u, PREFIXES)) +all_units.extend(units) all_units.append(Z0) @@ -40,4 +41,14 @@ magnetic_flux=dict(length=2, mass=1, current=-1, time=-2), )) -MKSA = MKS.extend(base=(A,), units=all_units, name='MKSA', dimension_system=dimsys_MKSA) +MKSA = MKS.extend(base=(ampere,), units=all_units, name='MKSA', dimension_system=dimsys_MKSA, derived_units={ + magnetic_flux: weber, + impedance: ohm, + current: ampere, + voltage: volt, + inductance: henry, + conductance: siemens, + magnetic_density: tesla, + charge: coulomb, + capacitance: farad, +}) diff --git a/sympy/physics/units/systems/si.py b/sympy/physics/units/systems/si.py index d483d56d790b..700495ad9d26 100644 --- a/sympy/physics/units/systems/si.py +++ b/sympy/physics/units/systems/si.py @@ -56,6 +56,7 @@ for u in units: all_units.extend(prefix_unit(u, PREFIXES)) +all_units.extend(units) all_units.extend([mol, cd, K, lux]) @@ -71,7 +72,29 @@ [information], ) -SI = MKSA.extend(base=(mol, cd, K), units=all_units, name='SI', dimension_system=dimsys_SI) +SI = MKSA.extend(base=(mol, cd, K), units=all_units, name='SI', dimension_system=dimsys_SI, derived_units={ + power: watt, + magnetic_flux: weber, + time: second, + impedance: ohm, + pressure: pascal, + current: ampere, + voltage: volt, + length: meter, + frequency: hertz, + inductance: henry, + temperature: kelvin, + amount_of_substance: mole, + luminous_intensity: candela, + conductance: siemens, + mass: kilogram, + magnetic_density: tesla, + charge: coulomb, + force: newton, + 
capacitance: farad, + energy: joule, + velocity: meter/second, +}) One = S.One diff --git a/sympy/physics/units/tests/test_quantities.py b/sympy/physics/units/tests/test_quantities.py index c9e5900d3d48..998b630eb7aa 100644 --- a/sympy/physics/units/tests/test_quantities.py +++ b/sympy/physics/units/tests/test_quantities.py @@ -11,7 +11,9 @@ from sympy.functions.elementary.trigonometric import sin from sympy.integrals.integrals import integrate from sympy.physics.units import (amount_of_substance, area, convert_to, find_unit, - volume, kilometer, joule) + volume, kilometer, joule, molar_gas_constant, + vacuum_permittivity, elementary_charge, volt, + ohm) from sympy.physics.units.definitions import (amu, au, centimeter, coulomb, day, foot, grams, hour, inch, kg, km, m, meter, millimeter, minute, quart, s, second, speed_of_light, bit, @@ -23,7 +25,7 @@ energy ) from sympy.physics.units.prefixes import PREFIXES, kilo -from sympy.physics.units.quantities import Quantity +from sympy.physics.units.quantities import PhysicalConstant, Quantity from sympy.physics.units.systems import SI from sympy.testing.pytest import XFAIL, raises, warns_deprecated_sympy @@ -524,3 +526,34 @@ def test_issue_22819(): assert tonne.convert_to(gram) == 1000000*gram assert dimsys_SI.get_dimensional_dependencies(area) == {'length': 2} assert Da.scale_factor == 1.66053906660000e-24 + + +def test_prefixed_property(): + assert not meter.is_prefixed + assert not joule.is_prefixed + assert not day.is_prefixed + assert not second.is_prefixed + assert not volt.is_prefixed + assert not ohm.is_prefixed + assert centimeter.is_prefixed + assert kilometer.is_prefixed + assert kilogram.is_prefixed + assert pebibyte.is_prefixed + +def test_physics_constant(): + from sympy.physics.units import definitions + + for name in dir(definitions): + quantity = getattr(definitions, name) + if not isinstance(quantity, Quantity): + continue + if name.endswith('_constant'): + assert isinstance(quantity, PhysicalConstant), f"{quantity} must be PhysicalConstant, but is {type(quantity)}" + assert quantity.is_physical_constant, f"{name} is not marked as physics constant when it should be" + + for const in [gravitational_constant, molar_gas_constant, vacuum_permittivity, speed_of_light, elementary_charge]: + assert isinstance(const, PhysicalConstant), f"{const} must be PhysicalConstant, but is {type(const)}" + assert const.is_physical_constant, f"{const} is not marked as physics constant when it should be" + + assert not meter.is_physical_constant + assert not joule.is_physical_constant diff --git a/sympy/physics/units/tests/test_unitsystem.py b/sympy/physics/units/tests/test_unitsystem.py index 5c0729cfdd47..a04f3aabb627 100644 --- a/sympy/physics/units/tests/test_unitsystem.py +++ b/sympy/physics/units/tests/test_unitsystem.py @@ -63,3 +63,24 @@ def test_is_consistent(): dimension_system = DimensionSystem([length, time]) us = UnitSystem([m, s], dimension_system=dimension_system) assert us.is_consistent == True + + +def test_get_units_non_prefixed(): + from sympy.physics.units import volt, ohm + unit_system = UnitSystem.get_unit_system("SI") + units = unit_system.get_units_non_prefixed() + for prefix in ["giga", "tera", "peta", "exa", "zetta", "yotta", "kilo", "hecto", "deca", "deci", "centi", "milli", "micro", "nano", "pico", "femto", "atto", "zepto", "yocto"]: + for unit in units: + assert isinstance(unit, Quantity), f"{unit} must be a Quantity, not {type(unit)}" + assert not unit.is_prefixed, f"{unit} is marked as prefixed" + assert not 
unit.is_physical_constant, f"{unit} is marked as physics constant" + assert not unit.name.name.startswith(prefix), f"Unit {unit.name} has prefix {prefix}" + assert volt in units + assert ohm in units + +def test_derived_units_must_exist_in_unit_system(): + for unit_system in UnitSystem._unit_systems.values(): + for preferred_unit in unit_system.derived_units.values(): + units = preferred_unit.atoms(Quantity) + for unit in units: + assert unit in unit_system._units, f"Unit {unit} is not in unit system {unit_system}" diff --git a/sympy/physics/units/tests/test_util.py b/sympy/physics/units/tests/test_util.py index 4e0bcfb83ef6..ab311e86ac46 100644 --- a/sympy/physics/units/tests/test_util.py +++ b/sympy/physics/units/tests/test_util.py @@ -120,6 +120,30 @@ def test_quantity_simplify(): assert quantity_simplify(2**(foot/inch*kilo/1000)*inch) == 4096*foot/12 assert quantity_simplify(foot**2*inch + inch**2*foot) == 13*foot**3/144 +def test_quantity_simplify_across_dimensions(): + from sympy.physics.units.util import quantity_simplify + from sympy.physics.units import ampere, ohm, volt, joule, pascal, farad, second, watt, siemens, henry, tesla, weber, hour, newton + + assert quantity_simplify(ampere*ohm, across_dimensions=True, unit_system="SI") == volt + assert quantity_simplify(6*ampere*ohm, across_dimensions=True, unit_system="SI") == 6*volt + assert quantity_simplify(volt/ampere, across_dimensions=True, unit_system="SI") == ohm + assert quantity_simplify(volt/ohm, across_dimensions=True, unit_system="SI") == ampere + assert quantity_simplify(joule/meter**3, across_dimensions=True, unit_system="SI") == pascal + assert quantity_simplify(farad*ohm, across_dimensions=True, unit_system="SI") == second + assert quantity_simplify(joule/second, across_dimensions=True, unit_system="SI") == watt + assert quantity_simplify(meter**3/second, across_dimensions=True, unit_system="SI") == meter**3/second + assert quantity_simplify(joule/second, across_dimensions=True, unit_system="SI") == watt + + assert quantity_simplify(joule/coulomb, across_dimensions=True, unit_system="SI") == volt + assert quantity_simplify(volt/ampere, across_dimensions=True, unit_system="SI") == ohm + assert quantity_simplify(ampere/volt, across_dimensions=True, unit_system="SI") == siemens + assert quantity_simplify(coulomb/volt, across_dimensions=True, unit_system="SI") == farad + assert quantity_simplify(volt*second/ampere, across_dimensions=True, unit_system="SI") == henry + assert quantity_simplify(volt*second/meter**2, across_dimensions=True, unit_system="SI") == tesla + assert quantity_simplify(joule/ampere, across_dimensions=True, unit_system="SI") == weber + + assert quantity_simplify(5*kilometer/hour, across_dimensions=True, unit_system="SI") == 25*meter/(18*second) + assert quantity_simplify(5*kilogram*meter/second**2, across_dimensions=True, unit_system="SI") == 5*newton def test_check_dimensions(): x = symbols('x') diff --git a/sympy/physics/units/unitsystem.py b/sympy/physics/units/unitsystem.py index dd402260da90..ddec1e09c9ba 100644 --- a/sympy/physics/units/unitsystem.py +++ b/sympy/physics/units/unitsystem.py @@ -2,7 +2,7 @@ Unit system for physical quantities; include definition of constants. 
""" -from typing import Dict as tDict +from typing import Dict as tDict, Set as tSet from sympy.core.add import Add from sympy.core.function import (Derivative, Function) @@ -10,6 +10,7 @@ from sympy.core.power import Pow from sympy.core.singleton import S from sympy.physics.units.dimensions import _QuantityMapper +from sympy.physics.units.quantities import Quantity from .dimensions import Dimension @@ -26,7 +27,7 @@ class UnitSystem(_QuantityMapper): _unit_systems = {} # type: tDict[str, UnitSystem] - def __init__(self, base_units, units=(), name="", descr="", dimension_system=None): + def __init__(self, base_units, units=(), name="", descr="", dimension_system=None, derived_units: tDict[Dimension, Quantity]={}): UnitSystem._unit_systems[name] = self @@ -37,6 +38,7 @@ def __init__(self, base_units, units=(), name="", descr="", dimension_system=Non self._dimension_system = dimension_system self._units = tuple(set(base_units) | set(units)) self._base_units = tuple(base_units) + self._derived_units = derived_units super().__init__() @@ -57,7 +59,7 @@ def __str__(self): def __repr__(self): return '<UnitSystem: %s>' % repr(self._base_units) - def extend(self, base, units=(), name="", description="", dimension_system=None): + def extend(self, base, units=(), name="", description="", dimension_system=None, derived_units: tDict[Dimension, Quantity]={}): """Extend the current system into a new one. Take the base and normal units of the current system to merge @@ -68,7 +70,7 @@ def extend(self, base, units=(), name="", description="", dimension_system=None) base = self._base_units + tuple(base) units = self._units + tuple(units) - return UnitSystem(base, units, name, description, dimension_system) + return UnitSystem(base, units, name, description, dimension_system, {**self._derived_units, **derived_units}) def get_dimension_system(self): return self._dimension_system @@ -121,6 +123,10 @@ def is_consistent(self): # test is performed in DimensionSystem return self.get_dimension_system().is_consistent + @property + def derived_units(self) -> tDict[Dimension, Quantity]: + return self._derived_units + def get_dimensional_expr(self, expr): from sympy.physics.units import Quantity if isinstance(expr, Mul): @@ -192,3 +198,9 @@ def _collect_factor_and_dimension(self, expr): return S.One, expr else: return expr, Dimension(1) + + def get_units_non_prefixed(self) -> tSet[Quantity]: + """ + Return the units of the system that do not have a prefix. 
+ """ + return set(filter(lambda u: not u.is_prefixed and not u.is_physical_constant, self._units)) diff --git a/sympy/physics/units/util.py b/sympy/physics/units/util.py index 13125ac1ddc0..74e110e2081d 100644 --- a/sympy/physics/units/util.py +++ b/sympy/physics/units/util.py @@ -3,6 +3,7 @@ """ from functools import reduce from collections.abc import Iterable +from typing import Optional from sympy.core.add import Add from sympy.core.containers import Tuple @@ -11,12 +12,13 @@ from sympy.core.sorting import ordered from sympy.core.sympify import sympify from sympy.matrices.common import NonInvertibleMatrixError -from sympy.physics.units.dimensions import Dimension +from sympy.physics.units.dimensions import Dimension, DimensionSystem +from sympy.physics.units.definitions import dimension_definitions from sympy.physics.units.prefixes import Prefix from sympy.physics.units.quantities import Quantity +from sympy.physics.units.unitsystem import UnitSystem from sympy.utilities.iterables import sift - def _get_conversion_matrix_for_expr(expr, target_units, unit_system): from sympy.matrices.dense import Matrix @@ -120,21 +122,26 @@ def get_total_scale_factor(expr): return expr_scale_factor * Mul.fromiter((1/get_total_scale_factor(u) * u) ** p for u, p in zip(target_units, depmat)) -def quantity_simplify(expr): +def quantity_simplify(expr, across_dimensions: bool=False, unit_system=None): """Return an equivalent expression in which prefixes are replaced with numerical values and all units of a given dimension are the - unified in a canonical manner. + unified in a canonical manner by default. `across_dimensions` allows + for units of different dimensions to be simplified together. + + `unit_system` must be specified if `across_dimensions` is True. Examples ======== >>> from sympy.physics.units.util import quantity_simplify >>> from sympy.physics.units.prefixes import kilo - >>> from sympy.physics.units import foot, inch + >>> from sympy.physics.units import foot, inch, joule, coulomb >>> quantity_simplify(kilo*foot*inch) 250*foot**2/3 >>> quantity_simplify(foot - 6*inch) foot/2 + >>> quantity_simplify(5*joule/coulomb, across_dimensions=True, unit_system="SI") + 5*volt """ if expr.is_Atom or not expr.has(Prefix, Quantity): @@ -154,6 +161,34 @@ def quantity_simplify(expr): ref = v[0]/v[0].scale_factor expr = expr.xreplace({vi: ref*vi.scale_factor for vi in v[1:]}) + if across_dimensions: + # combine quantities of different dimensions into a single + # quantity that is equivalent to the original expression + + if unit_system is None: + raise ValueError("unit_system must be specified if across_dimensions is True") + + unit_system = UnitSystem.get_unit_system(unit_system) + dimension_system: DimensionSystem = unit_system.get_dimension_system() + dim_expr = unit_system.get_dimensional_expr(expr) + dim_deps = dimension_system.get_dimensional_dependencies(dim_expr, mark_dimensionless=True) + + target_dimension: Optional[Dimension] = None + for result_dim, result_deps in dimension_system.dimensional_dependencies.items(): + if result_deps == dim_deps: + # Because dimensional_dependencies contains Symbols, we need to look up the + # corresponding `Dimension`. + target_dimension = getattr(dimension_definitions, result_dim.name) + break + + if target_dimension is None: + # if we can't find a target dimension, we can't do anything. unsure how to handle this case. 
+ return expr + + target_unit = unit_system.derived_units.get(target_dimension) + if target_unit: + expr = convert_to(expr, target_unit, unit_system) + return expr
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-23991@3d268de
sympy/sympy
Python
23,991
[GSoC] physics/continuum_mechanics: Added the `draw` method for the Truss Class
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #24040 #### Brief description of what is fixed or changed As a final step for the `Truss` class, the `draw` method for the same has been added. This method, similar to the one in `Beam`, returns a plot object representing the state of the truss denoting all of its properties like loads and supports along with its structural components like nodes and members. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.continuum_mechanics * Added the `draw` method for getting the diagram of a given `truss`. <!-- END RELEASE NOTES -->
2022-08-30T09:42:34Z
Truss - create and expose an attribute to get the length of the members The `Truss` class can be used as a starting point to compute node deflections thanks to the principle of virtual works or energy methods. However, these computations require the knowledge of the member lengths. The class already computes them internally. It would be nice to be able to access them via some attribute: `members_lengths` maybe?
[ { "body": "The `Truss` class can be used as a starting point to compute node deflections thanks to the principle of virtual works or energy methods. However, these computations require the knowledge of the member lengths. The class already computes them internally. It would be nice to be able to access them via some attribute: `members_lengths` maybe?", "number": 24040, "title": "Truss - create and expose an attribute to get the length of the members" } ]
821bf9133c61f208bbd8ae1237acba40d83b3401
{ "head_commit": "3d268de526226c7a6899923038f0aacec624074d", "head_commit_message": "draw method added in Truss", "patch_to_review": "diff --git a/sympy/physics/continuum_mechanics/truss.py b/sympy/physics/continuum_mechanics/truss.py\nindex 5c2d0d0517aa..ccddee798669 100644\n--- a/sympy/physics/continuum_mechanics/truss.py\n+++ b/sympy/physics/continuum_mechanics/truss.py\n@@ -3,17 +3,21 @@\n to 2D Trusses.\n \"\"\"\n \n+\n from cmath import inf\n from sympy.core.add import Add\n from sympy.core.mul import Mul\n from sympy.core.symbol import Symbol\n from sympy.core.sympify import sympify\n from sympy import Matrix, pi\n+from sympy.external.importtools import import_module\n from sympy.functions.elementary.miscellaneous import sqrt\n from sympy.matrices.dense import zeros\n import math\n+from sympy.plotting import plot, plot_parametric\n+from sympy.utilities.decorator import doctest_depends_on\n \n-\n+numpy = import_module('numpy', import_kwargs={'fromlist':['arange']})\n \n class Truss:\n \"\"\"\n@@ -733,3 +737,226 @@ def solve(self):\n self._internal_forces[member] = forces_matrix[i]\n i += 1\n return\n+\n+ @doctest_depends_on(modules=('numpy',))\n+ def draw(self):\n+ \"\"\"\n+ Returns a plot object of the Truss with all its nodes, members,\n+ supports and loads.\n+\n+ .. note::\n+ The user must be careful while entering load values their\n+ directions. The draw function assumes a sign convention which\n+ is used for plotting loads.\n+\n+ Given a right handed coordinate system with XYZ coordinates,\n+ the supports are assumed to be such that reaction forces of a\n+ pinned support are in the +X and +Y direction while those of a\n+ roller support are in the +Y direction. For the load, the range\n+ of angles one can input goes all the way to 360 degrees which, in the\n+ plot, is the angle that the load vector makes with positive x-axis.\n+\n+ For example, for a 90 degree angle, the load will be a vertically\n+ directed along +Y while a 270 degrees angle denotes a vertical\n+ load as well but along -Y.\n+\n+ Examples\n+ ========\n+\n+ .. 
plot::\n+ :context: close-figs\n+ :format: doctest\n+ :include-source: True\n+\n+ >>> from sympy.physics.continuum_mechanics.truss import Truss\n+ >>> t = Truss()\n+ >>> t.add_node(\"node_1\", 0, 0)\n+ >>> t.add_node(\"node_2\", 6, 0)\n+ >>> t.add_node(\"node_3\", 2, 6)\n+ >>> t.add_node(\"node_4\", 2, 0)\n+ >>> t.add_member(\"member_1\", \"node_1\", \"node_4\")\n+ >>> t.add_member(\"member_2\", \"node_2\", \"node_4\")\n+ >>> t.add_member(\"member_3\", \"node_1\", \"node_3\")\n+ >>> t.add_member(\"member_4\", \"node_2\", \"node_3\")\n+ >>> t.add_member(\"member_5\", \"node_3\", \"node_4\")\n+ >>> t.apply_load(\"node_4\", magnitude=10, direction=270)\n+ >>> t.apply_support(\"node_1\", type=\"pinned\")\n+ >>> t.apply_support(\"node_2\", type=\"roller\")\n+ >>> p = t.draw()\n+ >>> p\n+ Plot object containing:\n+ [0]: cartesian line: 0 for x over (0.0, 2.0)\n+ [1]: cartesian line: 0 for x over (2.0, 6.0)\n+ [2]: cartesian line: 3*x for x over (0.0, 2.0)\n+ [3]: cartesian line: 9 - 3*x/2 for x over (2.0, 6.0)\n+ [4]: parametric cartesian line: (2, y) for y over (0.0, 6.0)\n+ >>> p.show()\n+ \"\"\"\n+ if not numpy:\n+ raise ImportError(\"To use this function numpy module is required\")\n+\n+ markers = []\n+ annotations = []\n+\n+ node_markers = self._draw_nodes()\n+ markers += node_markers\n+\n+ support_markers = self._draw_supports()\n+ markers += support_markers\n+\n+ load_annotations = self._draw_loads()\n+ annotations += load_annotations\n+\n+ xmax = max(self._node_position_x)\n+ xmin = min(self._node_position_x)\n+ ymax = max(self._node_position_y)\n+ ymin = min(self._node_position_y)\n+\n+ lim = max(xmax*1.1-xmin*0.8+1, ymax*1.1-ymin*0.8+1)\n+\n+ if lim==xmax*1.1-xmin*0.8+1:\n+ sing_plot = plot(markers=markers, show=False, annotations=annotations, xlim=(xmin*0.8-0.05*lim, xmax*1.1), ylim=(xmin*0.8-0.05*lim, xmax*1.1), axis=False)\n+ else:\n+ sing_plot = plot(markers=markers, show=False, annotations=annotations, xlim=(ymin*0.8-0.05*lim, ymax*1.1), ylim=(ymin*0.8-0.05*lim, ymax*1.1), axis=False)\n+\n+\n+ sing_plot.extend(self._draw_members())\n+\n+ return sing_plot\n+\n+\n+ def _draw_nodes(self):\n+ node_markers = []\n+\n+ for node in self._nodes:\n+ node_markers.append(\n+ {\n+ 'args':[[node[1]], [node[2]]],\n+ 'marker':'o',\n+ 'markersize':5,\n+ 'color':'black'\n+ }\n+ )\n+\n+ return node_markers\n+\n+ def _draw_members(self):\n+ x = Symbol('x')\n+ y = Symbol('y')\n+\n+ member_plot = plot(show=False)\n+\n+ for member in self._members:\n+ x1 = self._node_coordinates[self._members[member][0]][0]\n+ y1 = self._node_coordinates[self._members[member][0]][1]\n+ x2 = self._node_coordinates[self._members[member][1]][0]\n+ y2 = self._node_coordinates[self._members[member][1]][1]\n+ if x1!= x2:\n+ p1 = plot(((y2-y1)*x/(x2-x1)+(y2*x1-y1*x2)/(x1-x2), (x, min(x1, x2), max(x1, x2))), show=False, line_color=\"brown\", linewidth=20)\n+ else:\n+ p1 = plot_parametric((x1, y), (y, min(y1, y2), max(y1, y2)), line_color=\"brown\", show=False)\n+ member_plot.extend(p1)\n+\n+ return member_plot\n+\n+ def _draw_supports(self):\n+ support_markers = []\n+\n+ xmax = max(self._node_position_x)\n+ xmin = min(self._node_position_x)\n+ ymax = max(self._node_position_y)\n+ ymin = min(self._node_position_y)\n+\n+ if abs(1.1*xmax-0.8*xmin)>abs(1.1*ymax-0.8*ymin):\n+ max_diff = 1.1*xmax-0.8*xmin\n+ else:\n+ max_diff = 1.1*ymax-0.8*ymin\n+\n+ for node in self._supports:\n+ if self._supports[node]=='pinned':\n+ support_markers.append(\n+ {\n+ 'args':[\n+ [self._node_coordinates[node][0]],\n+ 
[self._node_coordinates[node][1]]\n+ ],\n+ 'marker':6,\n+ 'markersize':15,\n+ 'color':'black',\n+ 'markerfacecolor':'none'\n+ }\n+ )\n+ support_markers.append(\n+ {\n+ 'args':[\n+ [self._node_coordinates[node][0]],\n+ [self._node_coordinates[node][1]-0.035*max_diff]\n+ ],\n+ 'marker':'_',\n+ 'markersize':14,\n+ 'color':'black'\n+ }\n+ )\n+\n+ elif self._supports[node]=='roller':\n+ support_markers.append(\n+ {\n+ 'args':[\n+ [self._node_coordinates[node][0]],\n+ [self._node_coordinates[node][1]-0.02*max_diff]\n+ ],\n+ 'marker':'o',\n+ 'markersize':11,\n+ 'color':'black',\n+ 'markerfacecolor':'none'\n+ }\n+ )\n+ support_markers.append(\n+ {\n+ 'args':[\n+ [self._node_coordinates[node][0]],\n+ [self._node_coordinates[node][1]-0.0375*max_diff]\n+ ],\n+ 'marker':'_',\n+ 'markersize':14,\n+ 'color':'black'\n+ }\n+ )\n+\n+ return support_markers\n+\n+ def _draw_loads(self):\n+ load_annotations = []\n+\n+ xmax = max(self._node_position_x)\n+ xmin = min(self._node_position_x)\n+ ymax = max(self._node_position_y)\n+ ymin = min(self._node_position_y)\n+\n+ if abs(1.1*xmax-0.8*xmin)>abs(1.1*ymax-0.8*ymin):\n+ max_diff = 1.1*xmax-0.8*xmin+5\n+ else:\n+ max_diff = 1.1*ymax-0.8*ymin+5\n+\n+ for node in self._loads:\n+ for load in self._loads[node]:\n+ if load[0] in [Symbol('R_'+str(node)+'_x'), Symbol('R_'+str(node)+'_y')]:\n+ continue\n+ x = self._node_coordinates[node][0]\n+ y = self._node_coordinates[node][1]\n+ load_annotations.append(\n+ {\n+ 'text':'',\n+ 'xy':(\n+ x-math.cos(pi*load[1]/180)*(max_diff/100),\n+ y-math.sin(pi*load[1]/180)*(max_diff/100)\n+ ),\n+ 'xytext':(\n+ x-(max_diff/100+abs(xmax-xmin)+abs(ymax-ymin))*math.cos(pi*load[1]/180)/20,\n+ y-(max_diff/100+abs(xmax-xmin)+abs(ymax-ymin))*math.sin(pi*load[1]/180)/20\n+ ),\n+ 'arrowprops':dict(width= 1.5, headlength=5, headwidth=5, facecolor='black')\n+ }\n+ )\n+\n+ return load_annotations\n" }
[ { "diff_hunk": "@@ -733,3 +737,226 @@ def solve(self):\n self._internal_forces[member] = forces_matrix[i]\n i += 1\n return\n+\n+ @doctest_depends_on(modules=('numpy',))\n+ def draw(self):\n+ \"\"\"\n+ Returns a plot object of the Truss with all its nodes, members,\n+ supports and loads.\n+\n+ .. note::\n+ The user must be careful while entering load values their\n+ directions. The draw function assumes a sign convention which\n+ is used for plotting loads.\n+\n+ Given a right handed coordinate system with XYZ coordinates,\n+ the supports are assumed to be such that reaction forces of a\n+ pinned support are in the +X and +Y direction while those of a\n+ roller support are in the +Y direction. For the load, the range\n+ of angles one can input goes all the way to 360 degrees which, in the\n+ plot, is the angle that the load vector makes with positive x-axis.", "line": null, "original_line": 757, "original_start_line": null, "path": "sympy/physics/continuum_mechanics/truss.py", "start_line": null, "text": "@user1:\nJust add the direction of the angle as well after \"positive x-axis\"" } ]
7272a5f04bd903ac067e85ddcc76db887717374a
diff --git a/sympy/physics/continuum_mechanics/truss.py b/sympy/physics/continuum_mechanics/truss.py index 8384a673f03b..a255123c3b27 100644 --- a/sympy/physics/continuum_mechanics/truss.py +++ b/sympy/physics/continuum_mechanics/truss.py @@ -3,17 +3,24 @@ to 2D Trusses. """ -from cmath import inf + +from cmath import atan, inf from sympy.core.add import Add +from sympy.core.evalf import INF from sympy.core.mul import Mul from sympy.core.symbol import Symbol from sympy.core.sympify import sympify from sympy import Matrix, pi +from sympy.external.importtools import import_module from sympy.functions.elementary.miscellaneous import sqrt from sympy.matrices.dense import zeros +import math +from sympy.physics.units.quantities import Quantity +from sympy.plotting import plot +from sympy.utilities.decorator import doctest_depends_on from sympy import sin, cos - +numpy = import_module('numpy', import_kwargs={'fromlist':['arange']}) class Truss: """ @@ -47,7 +54,7 @@ class Truss: >>> t.add_member("member_4", "node_2", "node_3") >>> t.add_member("member_5", "node_3", "node_4") >>> t.apply_load("node_4", magnitude=10, direction=270) - >>> t.apply_support("node_1", type="fixed") + >>> t.apply_support("node_1", type="pinned") >>> t.apply_support("node_2", type="roller") """ @@ -64,6 +71,7 @@ def __init__(self): self._node_position_x = [] self._node_position_y = [] self._nodes_occupied = {} + self._member_lengths = {} self._reaction_loads = {} self._internal_forces = {} self._node_coordinates = {} @@ -97,11 +105,11 @@ def members(self): return self._members @property - def member_labels(self): + def member_lengths(self): """ - Returns the members of the truss along with the start and end points. + Returns the length of each member of the truss. """ - return self._member_labels + return self._member_lengths @property def supports(self): @@ -263,6 +271,7 @@ def add_member(self, label, start, end): else: self._members[label] = [start, end] + self._member_lengths[label] = sqrt((self._node_coordinates[end][0]-self._node_coordinates[start][0])**2 + (self._node_coordinates[end][1]-self._node_coordinates[start][1])**2) self._nodes_occupied[start, end] = True self._nodes_occupied[end, start] = True self._internal_forces[label] = 0 @@ -300,6 +309,7 @@ def remove_member(self, label): self._nodes_occupied.pop((self._members[label][0], self._members[label][1])) self._nodes_occupied.pop((self._members[label][1], self._members[label][0])) self._members.pop(label) + self._member_lengths.pop(label) self._internal_forces.pop(label) def change_node_label(self, label, new_label): @@ -434,6 +444,8 @@ def change_member_label(self, label, new_label): if member == label: self._members[new_label] = [self._members[member][0], self._members[member][1]] self._members.pop(label) + self._member_lengths[new_label] = self._member_lengths[label] + self._member_lengths.pop(label) self._internal_forces[new_label] = self._internal_forces[label] self._internal_forces.pop(label) @@ -733,3 +745,355 @@ def solve(self): self._internal_forces[member] = forces_matrix[i] i += 1 return + + @doctest_depends_on(modules=('numpy',)) + def draw(self, subs_dict=None): + """ + Returns a plot object of the Truss with all its nodes, members, + supports and loads. + + .. note:: + The user must be careful while entering load values in their + directions. The draw function assumes a sign convention that + is used for plotting loads. 
+ + Given a right-handed coordinate system with XYZ coordinates, + the supports are assumed to be such that the reaction forces of a + pinned support is in the +X and +Y direction while those of a + roller support is in the +Y direction. For the load, the range + of angles, one can input goes all the way to 360 degrees which, in the + the plot is the angle that the load vector makes with the positive x-axis in the anticlockwise direction. + + For example, for a 90-degree angle, the load will be a vertically + directed along +Y while a 270-degree angle denotes a vertical + load as well but along -Y. + + Examples + ======== + + .. plot:: + :context: close-figs + :format: doctest + :include-source: True + + >>> from sympy.physics.continuum_mechanics.truss import Truss + >>> import math + >>> t = Truss() + >>> t.add_node("A", -4, 0) + >>> t.add_node("B", 0, 0) + >>> t.add_node("C", 4, 0) + >>> t.add_node("D", 8, 0) + >>> t.add_node("E", 6, 2/math.sqrt(3)) + >>> t.add_node("F", 2, 2*math.sqrt(3)) + >>> t.add_node("G", -2, 2/math.sqrt(3)) + >>> t.add_member("AB","A","B") + >>> t.add_member("BC","B","C") + >>> t.add_member("CD","C","D") + >>> t.add_member("AG","A","G") + >>> t.add_member("GB","G","B") + >>> t.add_member("GF","G","F") + >>> t.add_member("BF","B","F") + >>> t.add_member("FC","F","C") + >>> t.add_member("CE","C","E") + >>> t.add_member("FE","F","E") + >>> t.add_member("DE","D","E") + >>> t.apply_support("A","pinned") + >>> t.apply_support("D","roller") + >>> t.apply_load("G", 3, 90) + >>> t.apply_load("E", 3, 90) + >>> t.apply_load("F", 2, 90) + >>> p = t.draw() + >>> p + Plot object containing: + [0]: cartesian line: 1 for x over (1.0, 1.0) + >>> p.show() + """ + if not numpy: + raise ImportError("To use this function numpy module is required") + + x = Symbol('x') + + markers = [] + annotations = [] + rectangles = [] + + node_markers = self._draw_nodes(subs_dict) + markers += node_markers + + member_rectangles = self._draw_members() + rectangles += member_rectangles + + support_markers = self._draw_supports() + markers += support_markers + + load_annotations = self._draw_loads() + annotations += load_annotations + + xmax = -INF + xmin = INF + ymax = -INF + ymin = INF + + for node in list(self._node_coordinates): + xmax = max(xmax, self._node_coordinates[node][0]) + xmin = min(xmin, self._node_coordinates[node][0]) + ymax = max(ymax, self._node_coordinates[node][1]) + ymin = min(ymin, self._node_coordinates[node][1]) + + lim = max(xmax*1.1-xmin*0.8+1, ymax*1.1-ymin*0.8+1) + + if lim==xmax*1.1-xmin*0.8+1: + sing_plot = plot(1, (x, 1, 1), markers=markers, show=False, annotations=annotations, xlim=(xmin-0.05*lim, xmax*1.1), ylim=(xmin-0.05*lim, xmax*1.1), axis=False, rectangles=rectangles) + + else: + sing_plot = plot(1, (x, 1, 1), markers=markers, show=False, annotations=annotations, xlim=(ymin-0.05*lim, ymax*1.1), ylim=(ymin-0.05*lim, ymax*1.1), axis=False, rectangles=rectangles) + + return sing_plot + + + def _draw_nodes(self, subs_dict): + node_markers = [] + + for node in list(self._node_coordinates): + if (type(self._node_coordinates[node][0]) in (Symbol, Quantity)): + if self._node_coordinates[node][0] in list(subs_dict): + self._node_coordinates[node][0] = subs_dict[self._node_coordinates[node][0]] + else: + raise ValueError("provided substituted dictionary is not adequate") + elif (type(self._node_coordinates[node][0]) == Mul): + objects = self._node_coordinates[node][0].as_coeff_Mul() + for object in objects: + if type(object) in (Symbol, Quantity): + if subs_dict==None or 
object not in list(subs_dict): + raise ValueError("provided substituted dictionary is not adequate") + else: + self._node_coordinates[node][0] /= object + self._node_coordinates[node][0] *= subs_dict[object] + + if (type(self._node_coordinates[node][1]) in (Symbol, Quantity)): + if self._node_coordinates[node][1] in list(subs_dict): + self._node_coordinates[node][1] = subs_dict[self._node_coordinates[node][1]] + else: + raise ValueError("provided substituted dictionary is not adequate") + elif (type(self._node_coordinates[node][1]) == Mul): + objects = self._node_coordinates[node][1].as_coeff_Mul() + for object in objects: + if type(object) in (Symbol, Quantity): + if subs_dict==None or object not in list(subs_dict): + raise ValueError("provided substituted dictionary is not adequate") + else: + self._node_coordinates[node][1] /= object + self._node_coordinates[node][1] *= subs_dict[object] + + for node in list(self._node_coordinates): + node_markers.append( + { + 'args':[[self._node_coordinates[node][0]], [self._node_coordinates[node][1]]], + 'marker':'o', + 'markersize':5, + 'color':'black' + } + ) + return node_markers + + def _draw_members(self): + + member_rectangles = [] + + xmax = -INF + xmin = INF + ymax = -INF + ymin = INF + + for node in list(self._node_coordinates): + xmax = max(xmax, self._node_coordinates[node][0]) + xmin = min(xmin, self._node_coordinates[node][0]) + ymax = max(ymax, self._node_coordinates[node][1]) + ymin = min(ymin, self._node_coordinates[node][1]) + + if abs(1.1*xmax-0.8*xmin)>abs(1.1*ymax-0.8*ymin): + max_diff = 1.1*xmax-0.8*xmin + else: + max_diff = 1.1*ymax-0.8*ymin + + for member in self._members: + x1 = self._node_coordinates[self._members[member][0]][0] + y1 = self._node_coordinates[self._members[member][0]][1] + x2 = self._node_coordinates[self._members[member][1]][0] + y2 = self._node_coordinates[self._members[member][1]][1] + if x2!=x1 and y2!=y1: + if x2>x1: + member_rectangles.append( + { + 'xy':(x1-0.005*max_diff*cos(pi/4+atan((y2-y1)/(x2-x1)))/2, y1-0.005*max_diff*sin(pi/4+atan((y2-y1)/(x2-x1)))/2), + 'width':sqrt((x1-x2)**2+(y1-y2)**2)+0.005*max_diff/math.sqrt(2), + 'height':0.005*max_diff, + 'angle':180*atan((y2-y1)/(x2-x1))/pi, + 'color':'brown' + } + ) + else: + member_rectangles.append( + { + 'xy':(x2-0.005*max_diff*cos(pi/4+atan((y2-y1)/(x2-x1)))/2, y2-0.005*max_diff*sin(pi/4+atan((y2-y1)/(x2-x1)))/2), + 'width':sqrt((x1-x2)**2+(y1-y2)**2)+0.005*max_diff/math.sqrt(2), + 'height':0.005*max_diff, + 'angle':180*atan((y2-y1)/(x2-x1))/pi, + 'color':'brown' + } + ) + elif y2==y1: + if x2>x1: + member_rectangles.append( + { + 'xy':(x1-0.005*max_diff/2, y1-0.005*max_diff/2), + 'width':sqrt((x1-x2)**2+(y1-y2)**2), + 'height':0.005*max_diff, + 'angle':90*(1-math.copysign(1, x2-x1)), + 'color':'brown' + } + ) + else: + member_rectangles.append( + { + 'xy':(x1-0.005*max_diff/2, y1-0.005*max_diff/2), + 'width':sqrt((x1-x2)**2+(y1-y2)**2), + 'height':-0.005*max_diff, + 'angle':90*(1-math.copysign(1, x2-x1)), + 'color':'brown' + } + ) + else: + if y1<y2: + member_rectangles.append( + { + 'xy':(x1-0.005*max_diff/2, y1-0.005*max_diff/2), + 'width':sqrt((x1-x2)**2+(y1-y2)**2)+0.005*max_diff/2, + 'height':0.005*max_diff, + 'angle':90*math.copysign(1, y2-y1), + 'color':'brown' + } + ) + else: + member_rectangles.append( + { + 'xy':(x2-0.005*max_diff/2, y2-0.005*max_diff/2), + 'width':-(sqrt((x1-x2)**2+(y1-y2)**2)+0.005*max_diff/2), + 'height':0.005*max_diff, + 'angle':90*math.copysign(1, y2-y1), + 'color':'brown' + } + ) + + return member_rectangles + + def 
_draw_supports(self): + support_markers = [] + + xmax = -INF + xmin = INF + ymax = -INF + ymin = INF + + for node in list(self._node_coordinates): + xmax = max(xmax, self._node_coordinates[node][0]) + xmin = min(xmin, self._node_coordinates[node][0]) + ymax = max(ymax, self._node_coordinates[node][1]) + ymin = min(ymin, self._node_coordinates[node][1]) + if abs(1.1*xmax-0.8*xmin)>abs(1.1*ymax-0.8*ymin): + max_diff = 1.1*xmax-0.8*xmin + else: + max_diff = 1.1*ymax-0.8*ymin + + for node in self._supports: + if self._supports[node]=='pinned': + support_markers.append( + { + 'args':[ + [self._node_coordinates[node][0]], + [self._node_coordinates[node][1]] + ], + 'marker':6, + 'markersize':15, + 'color':'black', + 'markerfacecolor':'none' + } + ) + support_markers.append( + { + 'args':[ + [self._node_coordinates[node][0]], + [self._node_coordinates[node][1]-0.035*max_diff] + ], + 'marker':'_', + 'markersize':14, + 'color':'black' + } + ) + + elif self._supports[node]=='roller': + support_markers.append( + { + 'args':[ + [self._node_coordinates[node][0]], + [self._node_coordinates[node][1]-0.02*max_diff] + ], + 'marker':'o', + 'markersize':11, + 'color':'black', + 'markerfacecolor':'none' + } + ) + support_markers.append( + { + 'args':[ + [self._node_coordinates[node][0]], + [self._node_coordinates[node][1]-0.0375*max_diff] + ], + 'marker':'_', + 'markersize':14, + 'color':'black' + } + ) + return support_markers + + def _draw_loads(self): + load_annotations = [] + + xmax = -INF + xmin = INF + ymax = -INF + ymin = INF + + for node in list(self._node_coordinates): + xmax = max(xmax, self._node_coordinates[node][0]) + xmin = min(xmin, self._node_coordinates[node][0]) + ymax = max(ymax, self._node_coordinates[node][1]) + ymin = min(ymin, self._node_coordinates[node][1]) + + if abs(1.1*xmax-0.8*xmin)>abs(1.1*ymax-0.8*ymin): + max_diff = 1.1*xmax-0.8*xmin+5 + else: + max_diff = 1.1*ymax-0.8*ymin+5 + + for node in self._loads: + for load in self._loads[node]: + if load[0] in [Symbol('R_'+str(node)+'_x'), Symbol('R_'+str(node)+'_y')]: + continue + x = self._node_coordinates[node][0] + y = self._node_coordinates[node][1] + load_annotations.append( + { + 'text':'', + 'xy':( + x-math.cos(pi*load[1]/180)*(max_diff/100), + y-math.sin(pi*load[1]/180)*(max_diff/100) + ), + 'xytext':( + x-(max_diff/100+abs(xmax-xmin)+abs(ymax-ymin))*math.cos(pi*load[1]/180)/20, + y-(max_diff/100+abs(xmax-xmin)+abs(ymax-ymin))*math.sin(pi*load[1]/180)/20 + ), + 'arrowprops':{'width':1.5, 'headlength':5, 'headwidth':5, 'facecolor':'black'} + } + ) + return load_annotations
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
sympy__sympy-23096@15cdffd
sympy/sympy
Python
23,096
Changed the name of the node visiting function from `visit_Num` to `visit_Constant`.
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #23092 #### Brief description of what is fixed or changed I changed the name of the node visiting function `visit_Num` to `visit_Constant` because the former is deprecated since version 3.8. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * parsing * Changed the name of the `visit_Num` node visitor function to `visit_Constant`, as the former is deprecated. <!-- END RELEASE NOTES -->
2022-02-18T07:43:42Z
ast_parser: PendingDeprecationWarning: visit_Num is deprecated; add visit_Constant `sympy.parsing.ast_parser` triggers `PendingDeprecationWarning` in Python ≥ 3.8: ```console $ python3 -Wd Python 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from sympy.parsing.ast_parser import parse_expr >>> parse_expr('6 * 7', {}) /usr/lib/python3.8/ast.py:371: PendingDeprecationWarning: visit_Num is deprecated; add visit_Constant return visitor(node) 42 ```
This should be easy for someone to fix. I think we still support 3.7 so it will need to have a branch. I don't think there will be a problem, as the `Constant` Node type was [added in 3.6](https://greentreesnakes.readthedocs.io/en/latest/nodes.html#Constant), with plans to deprecate `Num` in 3.8.
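Aside: for context on the rename, here is a minimal self-contained sketch of an `ast.NodeTransformer` that uses the non-deprecated visitor name. It mirrors the idea of the patch but is independent of SymPy's actual `ast_parser` module; the `Integer` name is only referenced symbolically in the rewritten tree.

```python
import ast

class WrapIntegers(ast.NodeTransformer):
    # On Python >= 3.8 numeric literals arrive as ast.Constant nodes,
    # so the visitor is named visit_Constant rather than visit_Num.
    def visit_Constant(self, node):
        if isinstance(node.value, int) and not isinstance(node.value, bool):
            call = ast.Call(func=ast.Name(id='Integer', ctx=ast.Load()),
                            args=[node], keywords=[])
            return ast.fix_missing_locations(ast.copy_location(call, node))
        return node

tree = WrapIntegers().visit(ast.parse('6 * 7', mode='eval'))
print(ast.dump(tree))  # the literals 6 and 7 are now wrapped in Integer(...) calls
```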
[ { "body": "`sympy.parsing.ast_parser` triggers `PendingDeprecationWarning` in Python ≥ 3.8:\n```console\n$ python3 -Wd\nPython 3.8.10 (default, Nov 26 2021, 20:14:08) \n[GCC 9.3.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from sympy.parsing.ast_parser import parse_expr\n>>> parse_expr('6 * 7', {})\n/usr/lib/python3.8/ast.py:371: PendingDeprecationWarning: visit_Num is deprecated; add visit_Constant\n return visitor(node)\n42\n```", "number": 23092, "title": "ast_parser: PendingDeprecationWarning: visit_Num is deprecated; add visit_Constant" } ]
3821fd7254db887c1e57c91a356d5563a764a1df
{ "head_commit": "15cdffd2497cd269c806b87e52ee7ed5640d9a08", "head_commit_message": "Reordered the `.mailmap` file.", "patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 24c64a770b39..a70905f4dcfa 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -1141,6 +1141,7 @@ Tim Swast <[email protected]>\n Timothy Cyrus <[email protected]>\n Timothy Reluga <[email protected]> <[email protected]>\n Timothy Reluga <[email protected]> me <none@none>\n+Tirthankar Mazumder <[email protected]>\n TitanSnow <[email protected]>\n Tobias Lenz <[email protected]>\n Tom Bachmann <[email protected]> <[email protected]>\ndiff --git a/AUTHORS b/AUTHORS\nindex 3f356c42e94a..9038db2a8088 100644\n--- a/AUTHORS\n+++ b/AUTHORS\n@@ -4,7 +4,7 @@ those who explicitly didn't want to be mentioned. People with a * next\n to their names are not found in the metadata of the git history. This\n file is generated automatically by running `./bin/authors_update.py`.\n \n-There are a total of 1127 authors.\n+There are a total of 1129 authors.\n \n Ondřej Čertík <[email protected]>\n Fabian Pedregosa <[email protected]>\n@@ -1133,3 +1133,5 @@ Tom Fryers <[email protected]>\n Zouhair <[email protected]>\n zzj <[email protected]>\n shubhayu09 <[email protected]>\n+Siddhant Jain <[email protected]>\n+Tirthankar Mazumder <[email protected]>\ndiff --git a/sympy/parsing/ast_parser.py b/sympy/parsing/ast_parser.py\nindex 7d1d3c3f3b08..253639d18eb2 100644\n--- a/sympy/parsing/ast_parser.py\n+++ b/sympy/parsing/ast_parser.py\n@@ -32,7 +32,7 @@ def __init__(self, local_dict, global_dict):\n self.local_dict = local_dict\n self.global_dict = global_dict\n \n- def visit_Num(self, node):\n+ def visit_Constant(self, node):\n if isinstance(node.n, int):\n return fix_missing_locations(Call(func=Name('Integer', Load()),\n args=[node], keywords=[]))\n" }
[ { "diff_hunk": "@@ -4,7 +4,7 @@ those who explicitly didn't want to be mentioned. People with a * next\n to their names are not found in the metadata of the git history. This\n file is generated automatically by running `./bin/authors_update.py`.\n \n-There are a total of 1127 authors.\n+There are a total of 1129 authors.", "line": null, "original_line": 7, "original_start_line": null, "path": "AUTHORS", "start_line": null, "text": "@user1:\nThis change should also be reverted." }, { "diff_hunk": "@@ -1133,3 +1133,5 @@ Tom Fryers <[email protected]>\n Zouhair <[email protected]>\n zzj <[email protected]>\n shubhayu09 <[email protected]>\n+Siddhant Jain <[email protected]>\n+Tirthankar Mazumder <[email protected]>", "line": null, "original_line": 1137, "original_start_line": null, "path": "AUTHORS", "start_line": null, "text": "@user1:\nThe changes to the AUTHORS file should be reverted. The AUTHORS file will be updated automatically at the time of the next release based on the information in the .mailmap file." } ]
37f97867733b859c2e75b79a7390dcd70e1bef35
diff --git a/.mailmap b/.mailmap index 24c64a770b39..a70905f4dcfa 100644 --- a/.mailmap +++ b/.mailmap @@ -1141,6 +1141,7 @@ Tim Swast <[email protected]> Timothy Cyrus <[email protected]> Timothy Reluga <[email protected]> <[email protected]> Timothy Reluga <[email protected]> me <none@none> +Tirthankar Mazumder <[email protected]> TitanSnow <[email protected]> Tobias Lenz <[email protected]> Tom Bachmann <[email protected]> <[email protected]> diff --git a/sympy/parsing/ast_parser.py b/sympy/parsing/ast_parser.py index 7d1d3c3f3b08..253639d18eb2 100644 --- a/sympy/parsing/ast_parser.py +++ b/sympy/parsing/ast_parser.py @@ -32,7 +32,7 @@ def __init__(self, local_dict, global_dict): self.local_dict = local_dict self.global_dict = global_dict - def visit_Num(self, node): + def visit_Constant(self, node): if isinstance(node.n, int): return fix_missing_locations(Call(func=Name('Integer', Load()), args=[node], keywords=[]))
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Code Refactoring / Architectural Improvement" }
sympy__sympy-22973@201445d
sympy/sympy
Python
22,973
Add convenience methods for constructing `AlgebraicField`
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #22954 #### Brief description of what is fixed or changed Adds a couple of convenience methods for constructing `AlgebraicField`. Serves users who don't care about the complex embedding, by choosing an embedding for them. In particular, makes constructing a cyclotomic field much easier. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * polys * Add convenience constructors for `AlgebraicField` <!-- END RELEASE NOTES -->
2022-01-31T18:48:02Z
polys/numberfields: Convenience constructors for `AlgebraicField` I'd like to propose a pair of new constructor methods, to support a couple of common use cases. ## Use Case 1 * You want a number field `Q(a)`, * AND you have the minimal polynomial `f(x)` for `a`, * AND you don't care _which_ root of `f` `a` is. (E.g. it's a Galois field, so it doesn't matter.), * AND, OPTIONALLY you want an `alias` for `a` so that field elements print nicely. ## Use Case 2 * You want the `n`th cyclotomic field. * You probably want the primitive element to be aliased either `"zeta"` or `f"zeta{n}"`. ## The Current Situation For Use Case 1, you have to do this: ```python alpha = AlgebraicNumber(CRootOf(f, -1), alias="alpha") K = QQ.algebraic_field(alpha) ``` which has several problems: * It's too long. * You shouldn't have to import `CRootOf`. * You shouldn't have to choose an index for `CRootOf`. Use Case 2 is similar, and you also have to import `cyclotomic_poly`: ```python zeta = AlgebraicNumber(CRootOf(cyclotomic_poly(19), -1), alias="zeta") K = QQ.algebraic_field(zeta) ``` ## Proposal Alongside the `QQ.algebraic_field()` method (which takes one arg, being an `Expr`) I would propose two additional methods: ```python def alg_field_from_poly(self, poly, alias=None, root_index=-1): alpha = AlgebraicNumber(CRootOf(poly, root_index), alias=alias) return self.algebraic_field(alpha) def cyclotomic_field(self, n, alias="zeta", ss=False, gen=None, root_index=-1): if ss: alias += str(n) return self.alg_field_from_poly(cyclotomic_poly(n, gen), alias=alias, root_index=root_index) ``` Then the above examples become ```python K = QQ.alg_field_from_poly(f, "alpha") ``` and ```python K = QQ.cyclotomic_field(19) ``` ## Rationale For a "canonical" default root, `-1` is the best index to pass to `CRootOf`. In common cases like quadratic or cyclotomic fields, this will select the root you tend to think of: the positive square root, or the square root with positive imaginary part, or `exp(2*I*pi/n)` for the `n`th cyclotomic field.
How would `to_sympy` or `from_sympy` work if we don't know which root is the generator? We still do. These methods choose the root for you. I think this seems reasonable.
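Aside: as a quick illustration of how field elements print once an alias is attached, here is a short usage sketch assuming the constructors behave as proposed above (same names, defaults, and the `-1` root index).

```python
from sympy import QQ, Poly
from sympy.abc import x

# Use Case 1: extension generated by a chosen root of a given minimal polynomial.
K = QQ.alg_field_from_poly(Poly(x**2 - 2), alias="alpha")
print(K.to_sympy(K([1, 1])))   # alpha + 1  (alpha is the positive square root of 2)

# Use Case 2: cyclotomic field with the generator aliased "zeta" by default.
L = QQ.cyclotomic_field(5)
print(L.to_sympy(L([-1, 1])))  # 1 - zeta
```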
[ { "body": "I'd like to propose a pair of new constructor methods, to support a couple of common use cases.\r\n\r\n\r\n## Use Case 1\r\n\r\n* You want a number field `Q(a)`,\r\n* AND you have the minimal polynomial `f(x)` for `a`,\r\n* AND you don't care _which_ root of `f` `a` is. (E.g. it's a Galois field, so it doesn't matter.),\r\n* AND, OPTIONALLY you want an `alias` for `a` so that field elements print nicely.\r\n\r\n\r\n## Use Case 2\r\n\r\n* You want the `n`th cyclotomic field.\r\n* You probably want the primitive element to be aliased either `\"zeta\"` or `f\"zeta{n}\"`.\r\n\r\n\r\n\r\n## The Current Situation\r\n\r\nFor Use Case 1, you have to do this:\r\n```python\r\nalpha = AlgebraicNumber(CRootOf(f, -1), alias=\"alpha\")\r\nK = QQ.algebraic_field(alpha)\r\n\r\n```\r\nwhich has several problems:\r\n\r\n* It's too long.\r\n* You shouldn't have to import `CRootOf`.\r\n* You shouldn't have to choose an index for `CRootOf`.\r\n\r\n\r\nUse Case 2 is similar, and you also have to import `cyclotomic_poly`:\r\n\r\n```python\r\nzeta = AlgebraicNumber(CRootOf(cyclotomic_poly(19), -1), alias=\"zeta\")\r\nK = QQ.algebraic_field(zeta)\r\n```\r\n\r\n\r\n\r\n## Proposal\r\n\r\nAlongside the `QQ.algebraic_field()` method (which takes one arg, being an `Expr`) I would propose two additional methods:\r\n\r\n```python\r\n def alg_field_from_poly(self, poly, alias=None, root_index=-1):\r\n alpha = AlgebraicNumber(CRootOf(poly, root_index), alias=alias)\r\n return self.algebraic_field(alpha)\r\n\r\n def cyclotomic_field(self, n, alias=\"zeta\", ss=False, gen=None, root_index=-1):\r\n if ss:\r\n alias += str(n)\r\n return self.alg_field_from_poly(cyclotomic_poly(n, gen), alias=alias, root_index=root_index)\r\n\r\n```\r\n\r\nThen the above examples become\r\n\r\n```python\r\nK = QQ.alg_field_from_poly(f, \"alpha\")\r\n```\r\n\r\nand\r\n\r\n```python\r\nK = QQ.cyclotomic_field(19)\r\n```\r\n\r\n## Rationale\r\n\r\nFor a \"canonical\" default root, `-1` is the best index to pass to `CRootOf`. In common cases like quadratic or cyclotomic fields, this will select the root you tend to think of: the positive square root, or the square root with positive imaginary part, or `exp(2*I*pi/n)` for the `n`th cyclotomic field.\r\n", "number": 22954, "title": "polys/numberfields: Convenience constructors for `AlgebraicField`" } ]
c1bfbd69158c4a97ce605a14ee5a00287469e01a
{ "head_commit": "201445d88cf835807c4f37d529548e6f01e27eb3", "head_commit_message": "Simplify some tests.\n\nThese are unit and doctests where we construct an `AlgebraicField`.\nWe now make simpler constructions, using the new convenience constructor methods.", "patch_to_review": "diff --git a/sympy/polys/domains/domain.py b/sympy/polys/domains/domain.py\nindex 6228affacbfa..5a5ca468ccb7 100644\n--- a/sympy/polys/domains/domain.py\n+++ b/sympy/polys/domains/domain.py\n@@ -3,6 +3,7 @@\n \n from typing import Any, Optional, Type\n \n+from sympy.core.numbers import AlgebraicNumber\n from sympy.core import Basic, sympify\n from sympy.core.sorting import default_sort_key, ordered\n from sympy.external.gmpy import HAS_GMPY\n@@ -891,6 +892,90 @@ def algebraic_field(self, *extension):\n r\"\"\"Returns an algebraic field, i.e. `K(\\alpha, \\ldots)`. \"\"\"\n raise DomainError(\"Cannot create algebraic field over %s\" % self)\n \n+ def alg_field_from_poly(self, poly, alias=None, root_index=-1):\n+ r\"\"\"\n+ Convenience method to construct an algebraic extension on a root of a\n+ polynomial, chosen by root index.\n+\n+ Parameters\n+ ==========\n+\n+ poly : :py:class:`~.Poly`\n+ The polynomial whose root generates the extension.\n+ alias : str, optional (default=None)\n+ Symbol name for the generator of the extension.\n+ E.g. \"alpha\" or \"theta\".\n+ root_index : int, optional (default=-1)\n+ Specifies which root of the polynomial is desired. The ordering is\n+ as defined by the :py:class:`~.ComplexRootOf` class. The default of\n+ ``-1`` selects the most natural choice in the common cases of\n+ quadratic and cyclotomic fields (the square root on the positive\n+ real or imaginary axis, resp. $\\mathrm{e}^{2\\pi i/n}$).\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import QQ, Poly\n+ >>> from sympy.abc import x\n+ >>> f = Poly(x**2 - 2)\n+ >>> K = QQ.alg_field_from_poly(f)\n+ >>> K.ext.minpoly == f\n+ True\n+ >>> g = Poly(8*x**3 - 6*x - 1)\n+ >>> L = QQ.alg_field_from_poly(g, alias=\"alpha\")\n+ >>> L.ext.minpoly == g\n+ True\n+ >>> L.to_sympy(L([1, 1, 1]))\n+ alpha**2 + alpha + 1\n+\n+ \"\"\"\n+ from sympy.polys.rootoftools import CRootOf\n+ root = CRootOf(poly, root_index)\n+ alpha = AlgebraicNumber(root, alias=alias)\n+ return self.algebraic_field(alpha)\n+\n+ def cyclotomic_field(self, n, alias=\"zeta\", ss=False, gen=None, root_index=-1):\n+ r\"\"\"\n+ Convenience method to construct a cyclotomic field.\n+\n+ Parameters\n+ ==========\n+\n+ n : int\n+ Construct the nth cyclotomic field.\n+ alias : str, optional (default=\"zeta\")\n+ Symbol name for the generator.\n+ ss : boolean, optional (default=False)\n+ If True, append *n* as a subscript on the alias string.\n+ gen : :py:class:`~.Symbol`, optional (default=None)\n+ Desired variable for the cyclotomic polynomial that defines the\n+ field. If ``None``, a dummy variable will be used.\n+ root_index : int, optional (default=-1)\n+ Specifies which root of the polynomial is desired. The ordering is\n+ as defined by the :py:class:`~.ComplexRootOf` class. 
The default of\n+ ``-1`` selects the root $\\mathrm{e}^{2\\pi i/n}$.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import QQ, latex\n+ >>> K = QQ.cyclotomic_field(5)\n+ >>> K.to_sympy(K([-1, 1]))\n+ 1 - zeta\n+ >>> L = QQ.cyclotomic_field(7, ss=True)\n+ >>> a = L.to_sympy(L([-1, 1]))\n+ >>> print(a)\n+ 1 - zeta7\n+ >>> print(latex(a))\n+ 1 - \\zeta_{7}\n+\n+ \"\"\"\n+ from sympy.polys.specialpolys import cyclotomic_poly\n+ if ss:\n+ alias += str(n)\n+ return self.alg_field_from_poly(cyclotomic_poly(n, gen), alias=alias,\n+ root_index=root_index)\n+\n def inject(self, *symbols):\n \"\"\"Inject generators into this domain. \"\"\"\n raise NotImplementedError\ndiff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py\nindex 40b1cb059946..52a83be06d22 100644\n--- a/sympy/polys/domains/tests/test_domains.py\n+++ b/sympy/polys/domains/tests/test_domains.py\n@@ -17,7 +17,9 @@\n from sympy.polys.domains.polynomialring import PolynomialRing\n from sympy.polys.domains.realfield import RealField\n \n+from sympy.polys.numberfields.subfield import field_isomorphism\n from sympy.polys.rings import ring\n+from sympy.polys.specialpolys import cyclotomic_poly\n from sympy.polys.fields import field\n \n from sympy.polys.agca.extensions import FiniteExtension\n@@ -725,6 +727,38 @@ def test_Domain__algebraic_field():\n assert alg.dom == QQ\n \n \n+def test_Domain_alg_field_from_poly():\n+ f = Poly(x**2 - 2)\n+ g = Poly(x**2 - 3)\n+ h = Poly(x**4 - 10*x**2 + 1)\n+\n+ alg = ZZ.alg_field_from_poly(f)\n+ assert alg.ext.minpoly == f\n+ assert alg.dom == QQ\n+\n+ alg = QQ.alg_field_from_poly(f)\n+ assert alg.ext.minpoly == f\n+ assert alg.dom == QQ\n+\n+ alg = alg.alg_field_from_poly(g)\n+ assert alg.ext.minpoly == h\n+ assert alg.dom == QQ\n+\n+\n+def test_Domain_cyclotomic_field():\n+ K = ZZ.cyclotomic_field(12)\n+ assert K.ext.minpoly == Poly(cyclotomic_poly(12))\n+ assert K.dom == QQ\n+\n+ F = QQ.cyclotomic_field(3)\n+ assert F.ext.minpoly == Poly(cyclotomic_poly(3))\n+ assert F.dom == QQ\n+\n+ E = F.cyclotomic_field(4)\n+ assert field_isomorphism(E.ext, K.ext) is not None\n+ assert E.dom == QQ\n+\n+\n def test_PolynomialRing_from_FractionField():\n F, x,y = field(\"x,y\", ZZ)\n R, X,Y = ring(\"x,y\", ZZ)\ndiff --git a/sympy/polys/numberfields/basis.py b/sympy/polys/numberfields/basis.py\nindex 9f414d6b8c19..9cb4b0e0aab4 100644\n--- a/sympy/polys/numberfields/basis.py\n+++ b/sympy/polys/numberfields/basis.py\n@@ -115,15 +115,15 @@ def round_two(T, radicals=None):\n Working through an AlgebraicField:\n \n >>> from sympy import Poly, QQ\n- >>> from sympy.abc import x, theta\n+ >>> from sympy.abc import x\n >>> T = Poly(x ** 3 + x ** 2 - 2 * x + 8)\n- >>> K = QQ.algebraic_field((T, theta))\n+ >>> K = QQ.alg_field_from_poly(T, \"theta\")\n >>> print(K.maximal_order())\n Submodule[[2, 0, 0], [0, 2, 0], [0, 1, 1]]/2\n >>> print(K.discriminant())\n -503\n >>> print(K.integral_basis(fmt='sympy'))\n- [1, theta, theta**2/2 + theta/2]\n+ [1, theta, theta/2 + theta**2/2]\n \n Calling directly:\n \ndiff --git a/sympy/polys/numberfields/primes.py b/sympy/polys/numberfields/primes.py\nindex 38b5dec0b214..dd3f6145cbfc 100644\n--- a/sympy/polys/numberfields/primes.py\n+++ b/sympy/polys/numberfields/primes.py\n@@ -385,11 +385,8 @@ def prime_valuation(I, P):\n ========\n \n >>> from sympy import QQ\n- >>> from sympy.abc import theta\n- >>> from sympy.polys import cyclotomic_poly\n >>> from sympy.polys.numberfields import prime_valuation\n- >>> T = cyclotomic_poly(5)\n- >>> K = 
QQ.algebraic_field((T, theta))\n+ >>> K = QQ.cyclotomic_field(5)\n >>> P = K.primes_above(5)\n >>> ZK = K.maximal_order()\n >>> print(prime_valuation(25*ZK, P[0]))\ndiff --git a/sympy/polys/numberfields/tests/test_basis.py b/sympy/polys/numberfields/tests/test_basis.py\nindex dfa326cc51c3..c7ff79b7d5f6 100644\n--- a/sympy/polys/numberfields/tests/test_basis.py\n+++ b/sympy/polys/numberfields/tests/test_basis.py\n@@ -1,4 +1,4 @@\n-from sympy.abc import theta, x\n+from sympy.abc import x\n from sympy.core import S\n from sympy.core.numbers import AlgebraicNumber\n from sympy.functions.elementary.miscellaneous import sqrt\n@@ -61,7 +61,7 @@ def test_round_two():\n (x**3 + 15 * x**2 - 9 * x + 13, DM([((1, 6), (1, 3), (1, 6)), (0, 1, 0), (0, 0, 1)], QQ).transpose(), -5292),\n )\n for f, B_exp, d_exp in cases:\n- K = QQ.algebraic_field((f, theta))\n+ K = QQ.alg_field_from_poly(f)\n B = K.maximal_order().QQ_matrix\n d = K.discriminant()\n assert d == d_exp\ndiff --git a/sympy/polys/numberfields/tests/test_primes.py b/sympy/polys/numberfields/tests/test_primes.py\nindex a648f50e7308..09e31e012918 100644\n--- a/sympy/polys/numberfields/tests/test_primes.py\n+++ b/sympy/polys/numberfields/tests/test_primes.py\n@@ -163,7 +163,7 @@ def test_decomp_6():\n def test_decomp_7():\n # Try working through an AlgebraicField\n T = Poly(x ** 3 + x ** 2 - 2 * x + 8)\n- K = QQ.algebraic_field((T, theta))\n+ K = QQ.alg_field_from_poly(T)\n p = 2\n P = K.primes_above(p)\n ZK = K.maximal_order()\n" }
[ { "diff_hunk": "@@ -891,6 +892,90 @@ def algebraic_field(self, *extension):\n r\"\"\"Returns an algebraic field, i.e. `K(\\alpha, \\ldots)`. \"\"\"\n raise DomainError(\"Cannot create algebraic field over %s\" % self)\n \n+ def alg_field_from_poly(self, poly, alias=None, root_index=-1):\n+ r\"\"\"\n+ Convenience method to construct an algebraic extension on a root of a\n+ polynomial, chosen by root index.\n+\n+ Parameters\n+ ==========\n+\n+ poly : :py:class:`~.Poly`\n+ The polynomial whose root generates the extension.\n+ alias : str, optional (default=None)\n+ Symbol name for the generator of the extension.\n+ E.g. \"alpha\" or \"theta\".\n+ root_index : int, optional (default=-1)\n+ Specifies which root of the polynomial is desired. The ordering is\n+ as defined by the :py:class:`~.ComplexRootOf` class. The default of\n+ ``-1`` selects the most natural choice in the common cases of\n+ quadratic and cyclotomic fields (the square root on the positive\n+ real or imaginary axis, resp. $\\mathrm{e}^{2\\pi i/n}$).\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import QQ, Poly\n+ >>> from sympy.abc import x\n+ >>> f = Poly(x**2 - 2)\n+ >>> K = QQ.alg_field_from_poly(f)\n+ >>> K.ext.minpoly == f\n+ True\n+ >>> g = Poly(8*x**3 - 6*x - 1)\n+ >>> L = QQ.alg_field_from_poly(g, alias=\"alpha\")\n+ >>> L.ext.minpoly == g\n+ True\n+ >>> L.to_sympy(L([1, 1, 1]))\n+ alpha**2 + alpha + 1\n+\n+ \"\"\"\n+ from sympy.polys.rootoftools import CRootOf\n+ root = CRootOf(poly, root_index)\n+ alpha = AlgebraicNumber(root, alias=alias)\n+ return self.algebraic_field(alpha)\n+\n+ def cyclotomic_field(self, n, alias=\"zeta\", ss=False, gen=None, root_index=-1):", "line": null, "original_line": 937, "original_start_line": null, "path": "sympy/polys/domains/domain.py", "start_line": null, "text": "@author:\nChecking this over, I realized the order of the args `ss` and `alias` should be swapped. It will be more common to want to activate the subscript than to want to use a symbol other than `zeta`.\r\n\r\nI'll make this change and (if still no objections) will merge soon." } ]
c55d6f6740aaa7ace48d35be6e3b8970595e2260
diff --git a/sympy/polys/domains/domain.py b/sympy/polys/domains/domain.py index 6228affacbfa..b5185b288d69 100644 --- a/sympy/polys/domains/domain.py +++ b/sympy/polys/domains/domain.py @@ -3,6 +3,7 @@ from typing import Any, Optional, Type +from sympy.core.numbers import AlgebraicNumber from sympy.core import Basic, sympify from sympy.core.sorting import default_sort_key, ordered from sympy.external.gmpy import HAS_GMPY @@ -891,6 +892,90 @@ def algebraic_field(self, *extension): r"""Returns an algebraic field, i.e. `K(\alpha, \ldots)`. """ raise DomainError("Cannot create algebraic field over %s" % self) + def alg_field_from_poly(self, poly, alias=None, root_index=-1): + r""" + Convenience method to construct an algebraic extension on a root of a + polynomial, chosen by root index. + + Parameters + ========== + + poly : :py:class:`~.Poly` + The polynomial whose root generates the extension. + alias : str, optional (default=None) + Symbol name for the generator of the extension. + E.g. "alpha" or "theta". + root_index : int, optional (default=-1) + Specifies which root of the polynomial is desired. The ordering is + as defined by the :py:class:`~.ComplexRootOf` class. The default of + ``-1`` selects the most natural choice in the common cases of + quadratic and cyclotomic fields (the square root on the positive + real or imaginary axis, resp. $\mathrm{e}^{2\pi i/n}$). + + Examples + ======== + + >>> from sympy import QQ, Poly + >>> from sympy.abc import x + >>> f = Poly(x**2 - 2) + >>> K = QQ.alg_field_from_poly(f) + >>> K.ext.minpoly == f + True + >>> g = Poly(8*x**3 - 6*x - 1) + >>> L = QQ.alg_field_from_poly(g, "alpha") + >>> L.ext.minpoly == g + True + >>> L.to_sympy(L([1, 1, 1])) + alpha**2 + alpha + 1 + + """ + from sympy.polys.rootoftools import CRootOf + root = CRootOf(poly, root_index) + alpha = AlgebraicNumber(root, alias=alias) + return self.algebraic_field(alpha) + + def cyclotomic_field(self, n, ss=False, alias="zeta", gen=None, root_index=-1): + r""" + Convenience method to construct a cyclotomic field. + + Parameters + ========== + + n : int + Construct the nth cyclotomic field. + ss : boolean, optional (default=False) + If True, append *n* as a subscript on the alias string. + alias : str, optional (default="zeta") + Symbol name for the generator. + gen : :py:class:`~.Symbol`, optional (default=None) + Desired variable for the cyclotomic polynomial that defines the + field. If ``None``, a dummy variable will be used. + root_index : int, optional (default=-1) + Specifies which root of the polynomial is desired. The ordering is + as defined by the :py:class:`~.ComplexRootOf` class. The default of + ``-1`` selects the root $\mathrm{e}^{2\pi i/n}$. + + Examples + ======== + + >>> from sympy import QQ, latex + >>> K = QQ.cyclotomic_field(5) + >>> K.to_sympy(K([-1, 1])) + 1 - zeta + >>> L = QQ.cyclotomic_field(7, True) + >>> a = L.to_sympy(L([-1, 1])) + >>> print(a) + 1 - zeta7 + >>> print(latex(a)) + 1 - \zeta_{7} + + """ + from sympy.polys.specialpolys import cyclotomic_poly + if ss: + alias += str(n) + return self.alg_field_from_poly(cyclotomic_poly(n, gen), alias=alias, + root_index=root_index) + def inject(self, *symbols): """Inject generators into this domain. 
""" raise NotImplementedError diff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py index 40b1cb059946..52a83be06d22 100644 --- a/sympy/polys/domains/tests/test_domains.py +++ b/sympy/polys/domains/tests/test_domains.py @@ -17,7 +17,9 @@ from sympy.polys.domains.polynomialring import PolynomialRing from sympy.polys.domains.realfield import RealField +from sympy.polys.numberfields.subfield import field_isomorphism from sympy.polys.rings import ring +from sympy.polys.specialpolys import cyclotomic_poly from sympy.polys.fields import field from sympy.polys.agca.extensions import FiniteExtension @@ -725,6 +727,38 @@ def test_Domain__algebraic_field(): assert alg.dom == QQ +def test_Domain_alg_field_from_poly(): + f = Poly(x**2 - 2) + g = Poly(x**2 - 3) + h = Poly(x**4 - 10*x**2 + 1) + + alg = ZZ.alg_field_from_poly(f) + assert alg.ext.minpoly == f + assert alg.dom == QQ + + alg = QQ.alg_field_from_poly(f) + assert alg.ext.minpoly == f + assert alg.dom == QQ + + alg = alg.alg_field_from_poly(g) + assert alg.ext.minpoly == h + assert alg.dom == QQ + + +def test_Domain_cyclotomic_field(): + K = ZZ.cyclotomic_field(12) + assert K.ext.minpoly == Poly(cyclotomic_poly(12)) + assert K.dom == QQ + + F = QQ.cyclotomic_field(3) + assert F.ext.minpoly == Poly(cyclotomic_poly(3)) + assert F.dom == QQ + + E = F.cyclotomic_field(4) + assert field_isomorphism(E.ext, K.ext) is not None + assert E.dom == QQ + + def test_PolynomialRing_from_FractionField(): F, x,y = field("x,y", ZZ) R, X,Y = ring("x,y", ZZ) diff --git a/sympy/polys/numberfields/basis.py b/sympy/polys/numberfields/basis.py index 9f414d6b8c19..9cb4b0e0aab4 100644 --- a/sympy/polys/numberfields/basis.py +++ b/sympy/polys/numberfields/basis.py @@ -115,15 +115,15 @@ def round_two(T, radicals=None): Working through an AlgebraicField: >>> from sympy import Poly, QQ - >>> from sympy.abc import x, theta + >>> from sympy.abc import x >>> T = Poly(x ** 3 + x ** 2 - 2 * x + 8) - >>> K = QQ.algebraic_field((T, theta)) + >>> K = QQ.alg_field_from_poly(T, "theta") >>> print(K.maximal_order()) Submodule[[2, 0, 0], [0, 2, 0], [0, 1, 1]]/2 >>> print(K.discriminant()) -503 >>> print(K.integral_basis(fmt='sympy')) - [1, theta, theta**2/2 + theta/2] + [1, theta, theta/2 + theta**2/2] Calling directly: diff --git a/sympy/polys/numberfields/primes.py b/sympy/polys/numberfields/primes.py index 38b5dec0b214..dd3f6145cbfc 100644 --- a/sympy/polys/numberfields/primes.py +++ b/sympy/polys/numberfields/primes.py @@ -385,11 +385,8 @@ def prime_valuation(I, P): ======== >>> from sympy import QQ - >>> from sympy.abc import theta - >>> from sympy.polys import cyclotomic_poly >>> from sympy.polys.numberfields import prime_valuation - >>> T = cyclotomic_poly(5) - >>> K = QQ.algebraic_field((T, theta)) + >>> K = QQ.cyclotomic_field(5) >>> P = K.primes_above(5) >>> ZK = K.maximal_order() >>> print(prime_valuation(25*ZK, P[0])) diff --git a/sympy/polys/numberfields/tests/test_basis.py b/sympy/polys/numberfields/tests/test_basis.py index dfa326cc51c3..c7ff79b7d5f6 100644 --- a/sympy/polys/numberfields/tests/test_basis.py +++ b/sympy/polys/numberfields/tests/test_basis.py @@ -1,4 +1,4 @@ -from sympy.abc import theta, x +from sympy.abc import x from sympy.core import S from sympy.core.numbers import AlgebraicNumber from sympy.functions.elementary.miscellaneous import sqrt @@ -61,7 +61,7 @@ def test_round_two(): (x**3 + 15 * x**2 - 9 * x + 13, DM([((1, 6), (1, 3), (1, 6)), (0, 1, 0), (0, 0, 1)], QQ).transpose(), -5292), ) for f, B_exp, d_exp 
in cases: - K = QQ.algebraic_field((f, theta)) + K = QQ.alg_field_from_poly(f) B = K.maximal_order().QQ_matrix d = K.discriminant() assert d == d_exp diff --git a/sympy/polys/numberfields/tests/test_primes.py b/sympy/polys/numberfields/tests/test_primes.py index a648f50e7308..09e31e012918 100644 --- a/sympy/polys/numberfields/tests/test_primes.py +++ b/sympy/polys/numberfields/tests/test_primes.py @@ -163,7 +163,7 @@ def test_decomp_6(): def test_decomp_7(): # Try working through an AlgebraicField T = Poly(x ** 3 + x ** 2 - 2 * x + 8) - K = QQ.algebraic_field((T, theta)) + K = QQ.alg_field_from_poly(T) p = 2 P = K.primes_above(p) ZK = K.maximal_order()
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "New Feature Additions" }
sympy__sympy-22969@db9a049
sympy/sympy
Python
22,969
physics : Added refractive index in Gaussian Beam Parameters and refactored code
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #19169 Closes #19169 #### Brief description of what is fixed or changed According to the earlier implementation, when transmission to a different medium takes place the Rayleigh length `z_r` changes while the wavelength stays the same. Hence the beam waist `w_0` changed, which is not physical. Now, as can be verified from the test cases, when the refractive index is doubled the Rayleigh length proportionately doubles but the beam waist remains the same. Also, the code/docstrings have been refactored and made more understandable for users/contributors. 1. waist is replaced by beam waist, which is the fundamental parameter used. All the information I have added/changed is validated by these online resources - https://en.wikipedia.org/wiki/Gaussian_beam https://phyweb.physics.nus.edu.sg/~l2000/pc2193/Experiments/lab.pdf #### Other comments Note: the user must be aware of when the medium is changed. Users reading the code must recognize what `w` refers to at different parts of the code. A test has been modified for better understanding of users. #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.optics * Added refractive index to beam parameters. <!-- END RELEASE NOTES -->
2022-01-31T05:35:02Z
Gaussian Optics / Refractive index not considered Dear sympy maintainers, please correct me, if I am wrong, but I fear that the beam waist in class sympy.physics.optics.gaussopt.BeamParameter is not correctly computed. From the source: def w_0(self): """ The beam waist (minimal radius). See Also ======== w : the beam radius at `1/e^2` intensity Examples ======== >>> from sympy.physics.optics import BeamParameter >>> p = BeamParameter(530e-9, 1, w=1e-3) >>> p.w_0 0.00100000000000000 """ return sqrt(self.z_r/pi*self.wavelen) After transmission through a surface with the refractive index changing, the Rayleigh length z_r would change, while wavelength stays the same. According to this implementation, w_0 changes, which is not physical. If I might help to solve this, I would be happy to contribute. However, I have not a very good understanding of sympy, and this code is interfaced of course with sympy. Best regards, Lukas
I don't know the physics so well, but you are of course welcome to correct it. Many contributors can help with the sympy aspects, but I'm not sure how many would know the physics for this case. Here the formula implemented is incorrect. This will fix itself once the correct formula is used. The relation between Rayleigh length `z_r` and beam waist `w_0` involves the refractive index too. Check out the beam parameters from Wikipedia: https://en.wikipedia.org/wiki/Gaussian_beam. I'll open up a pull request on this issue soon.
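As a quick numerical check of the relation discussed above, here is a small, self-contained sketch in plain Python (not the sympy implementation); `beam_waist` is a stand-in helper for the `w_0` property and the sample values are only illustrative.

```python
from math import pi, sqrt

def waist2rayleigh(w, wavelen, n=1):
    # Rayleigh range from the beam waist; the refractive index n
    # now enters the formula: z_r = pi * n * w**2 / wavelen.
    return w**2 * n * pi / wavelen

def beam_waist(z_r, wavelen, n=1):
    # Inverting the relation recovers the waist unchanged, regardless
    # of the medium: w_0 = sqrt(z_r * wavelen / (pi * n)).
    return sqrt(z_r / (pi * n) * wavelen)

wavelen, w0 = 530e-9, 1e-3
for n in (1, 2):
    z_r = waist2rayleigh(w0, wavelen, n)
    # Doubling n doubles z_r (~5.93 -> ~11.86) while w_0 stays at 1e-3.
    print(n, round(z_r, 4), beam_waist(z_r, wavelen, n))
```

The printed values match the expectations in the PR's test cases: the Rayleigh length scales with the refractive index while the beam waist does not.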
[ { "body": "Dear sympy maintainers,\r\n\r\nplease correct me, if I am wrong, but I fear that the beam waist in \r\n\r\nclass sympy.physics.optics.gaussopt.BeamParameter\r\n\r\nis not correctly computed.\r\n\r\nFrom the source:\r\n\r\ndef w_0(self):\r\n \"\"\"\r\n The beam waist (minimal radius).\r\n\r\n See Also\r\n ========\r\n\r\n w : the beam radius at `1/e^2` intensity\r\n\r\n Examples\r\n ========\r\n\r\n >>> from sympy.physics.optics import BeamParameter\r\n >>> p = BeamParameter(530e-9, 1, w=1e-3)\r\n >>> p.w_0\r\n 0.00100000000000000\r\n \"\"\"\r\n return sqrt(self.z_r/pi*self.wavelen)\r\n\r\nAfter transmission through a surface with the refractive index changing, the Rayleigh length z_r would change, while wavelength stays the same. According to this implementation, w_0 changes, which is not physical.\r\n\r\nIf I might help to solve this, I would be happy to contribute. \r\nHowever, I have not a very good understanding of sympy, and this code is interfaced of course with sympy.\r\n\r\nBest regards,\r\nLukas", "number": 19169, "title": "Gaussian Optics / Refractive index not considered" } ]
d7bddf4f882fe5887638cf3336e7b2f4e989020b
{ "head_commit": "db9a0496afe31ce1f5946fd3283498c9e470b5fc", "head_commit_message": "Positional_Arguments_Fixed", "patch_to_review": "diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py\nindex e027d9ad0f46..c74e4620ec8f 100644\n--- a/sympy/core/tests/test_args.py\n+++ b/sympy/core/tests/test_args.py\n@@ -4912,7 +4912,7 @@ def test_sympy__physics__optics__waves__TWave():\n \n def test_sympy__physics__optics__gaussopt__BeamParameter():\n from sympy.physics.optics import BeamParameter\n- assert _test_args(BeamParameter(530e-9, 1, w=1e-3))\n+ assert _test_args(BeamParameter(530e-9, 1, w=1e-3, n=1))\n \n \n def test_sympy__physics__optics__medium__Medium():\ndiff --git a/sympy/physics/optics/gaussopt.py b/sympy/physics/optics/gaussopt.py\nindex 48d519e92fe0..56037ff8162f 100644\n--- a/sympy/physics/optics/gaussopt.py\n+++ b/sympy/physics/optics/gaussopt.py\n@@ -487,6 +487,7 @@ class BeamParameter(Expr):\n z : the distance to waist, and\n w : the waist, or\n z_r : the rayleigh range.\n+ n : the refractive index of medium.\n \n Examples\n ========\n@@ -506,9 +507,9 @@ class BeamParameter(Expr):\n >>> from sympy.physics.optics import FreeSpace\n >>> fs = FreeSpace(10)\n >>> p1 = fs*p\n- >>> p.w.n()\n+ >>> p.w_z.n()\n 0.00101413072159615\n- >>> p1.w.n()\n+ >>> p1.w_z.n()\n 0.00210803120913829\n \n See Also\n@@ -526,18 +527,19 @@ class BeamParameter(Expr):\n # subclass it. See:\n # https://groups.google.com/d/topic/sympy/7XkU07NRBEs/discussion\n \n- def __new__(cls, wavelen, z, z_r=None, w=None):\n+ def __new__(cls, wavelen, z, z_r=None, w=None, n=1):\n wavelen = sympify(wavelen)\n z = sympify(z)\n+ n = sympify(n)\n \n if z_r is not None and w is None:\n z_r = sympify(z_r)\n elif w is not None and z_r is None:\n- z_r = waist2rayleigh(sympify(w), wavelen)\n- else:\n- raise ValueError('Constructor expects exactly one named argument.')\n+ z_r = waist2rayleigh(sympify(w), wavelen, n)\n+ elif z_r is None and w is None:\n+ raise ValueError('Must specify one of w and z_r.')\n \n- return Expr.__new__(cls, wavelen, z, z_r)\n+ return Expr.__new__(cls, wavelen, z, z_r, n)\n \n @property\n def wavelen(self):\n@@ -551,6 +553,10 @@ def z(self):\n def z_r(self):\n return self.args[2]\n \n+ @property\n+ def n(self):\n+ return self.args[3]\n+\n @property\n def q(self):\n \"\"\"\n@@ -582,9 +588,10 @@ def radius(self):\n return self.z*(1 + (self.z_r/self.z)**2)\n \n @property\n- def w(self):\n+ def w_z(self):\n \"\"\"\n- The beam radius at `1/e^2` intensity.\n+ The radius of the beam w(z), at any position z along the beam.\n+ The beam radius at `1/e^2` intensity (axial value).\n \n See Also\n ========\n@@ -597,7 +604,7 @@ def w(self):\n \n >>> from sympy.physics.optics import BeamParameter\n >>> p = BeamParameter(530e-9, 1, w=1e-3)\n- >>> p.w\n+ >>> p.w_z\n 0.001*sqrt(0.2809/pi**2 + 1)\n \"\"\"\n return self.w_0*sqrt(1 + (self.z/self.z_r)**2)\n@@ -605,12 +612,12 @@ def w(self):\n @property\n def w_0(self):\n \"\"\"\n- The beam waist (minimal radius).\n+ The minimal radius of beam at `1/e^2` intensity (peak value).\n \n See Also\n ========\n \n- w : the beam radius at `1/e^2` intensity\n+ w_z : the beam radius at `1/e^2` intensity (axial value).\n \n Examples\n ========\n@@ -620,7 +627,7 @@ def w_0(self):\n >>> p.w_0\n 0.00100000000000000\n \"\"\"\n- return sqrt(self.z_r/pi*self.wavelen)\n+ return sqrt(self.z_r/(pi*self.n)*self.wavelen)\n \n @property\n def divergence(self):\n@@ -678,7 +685,7 @@ def waist_approximation_limit(self):\n # Utilities\n ###\n \n-def waist2rayleigh(w, wavelen):\n+def 
waist2rayleigh(w, wavelen, n=1):\n \"\"\"\n Calculate the rayleigh range from the waist of a gaussian beam.\n \n@@ -697,7 +704,7 @@ def waist2rayleigh(w, wavelen):\n pi*w**2/wavelen\n \"\"\"\n w, wavelen = map(sympify, (w, wavelen))\n- return w**2*pi/wavelen\n+ return w**2*n*pi/wavelen\n \n \n def rayleigh2waist(z_r, wavelen):\ndiff --git a/sympy/physics/optics/tests/test_gaussopt.py b/sympy/physics/optics/tests/test_gaussopt.py\nindex ed099d254433..523f7e564208 100644\n--- a/sympy/physics/optics/tests/test_gaussopt.py\n+++ b/sympy/physics/optics/tests/test_gaussopt.py\n@@ -57,8 +57,8 @@ def test_gauss_opt():\n assert streq(N(p.z_r), Float(5.92753330865999))\n fs = FreeSpace(10)\n p1 = fs*p\n- assert streq(N(p.w), Float(0.00101413072159615))\n- assert streq(N(p1.w), Float(0.00210803120913829))\n+ assert streq(N(p.w_z), Float(0.00101413072159615))\n+ assert streq(N(p1.w_z), Float(0.00210803120913829))\n \n w, wavelen = symbols('w wavelen')\n assert waist2rayleigh(w, wavelen) == pi*w**2/wavelen\n@@ -90,8 +90,13 @@ def test_gauss_opt():\n z, l, w = symbols('z l r', positive=True)\n p = BeamParameter(l, z, w=w)\n assert p.radius == z*(pi**2*w**4/(l**2*z**2) + 1)\n- assert p.w == w*sqrt(l**2*z**2/(pi**2*w**4) + 1)\n+ assert p.w_z == w*sqrt(l**2*z**2/(pi**2*w**4) + 1)\n assert p.w_0 == w\n assert p.divergence == l/(pi*w)\n assert p.gouy == atan2(z, pi*w**2/l)\n assert p.waist_approximation_limit == 2*l/pi\n+\n+ p = BeamParameter(530e-9, 1, w=1e-3, n=2)\n+ assert streq(p.q, 1 + 3.77358490566038*I*pi)\n+ assert streq(N(p.z_r), Float(11.8550666173200))\n+ assert streq(N(p.w_0), Float(0.00100000000000000))\n" }
[ { "diff_hunk": "@@ -582,9 +588,10 @@ def radius(self):\n return self.z*(1 + (self.z_r/self.z)**2)\n \n @property\n- def w(self):\n+ def w_z(self):", "line": null, "original_line": 591, "original_start_line": null, "path": "sympy/physics/optics/gaussopt.py", "start_line": null, "text": "@user1:\nPR is not letting me unresolve conversation so I am adding comment here:\r\n\r\nI don't find the word \"spot\" in SymPy -- where is the spot size talked about? It is not yet clear why this should be changed from `w` to `w_z`.\n\n@author:\nI'll unresolve that conversation ,thanks !\n\n@author:\nUnresolved and addressed that change !" } ]
6b59870830f9fdcf01d3b73f41d4225f56910002
diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py index e027d9ad0f46..c74e4620ec8f 100644 --- a/sympy/core/tests/test_args.py +++ b/sympy/core/tests/test_args.py @@ -4912,7 +4912,7 @@ def test_sympy__physics__optics__waves__TWave(): def test_sympy__physics__optics__gaussopt__BeamParameter(): from sympy.physics.optics import BeamParameter - assert _test_args(BeamParameter(530e-9, 1, w=1e-3)) + assert _test_args(BeamParameter(530e-9, 1, w=1e-3, n=1)) def test_sympy__physics__optics__medium__Medium(): diff --git a/sympy/physics/optics/gaussopt.py b/sympy/physics/optics/gaussopt.py index 48d519e92fe0..564436961a4f 100644 --- a/sympy/physics/optics/gaussopt.py +++ b/sympy/physics/optics/gaussopt.py @@ -487,6 +487,7 @@ class BeamParameter(Expr): z : the distance to waist, and w : the waist, or z_r : the rayleigh range. + n : the refractive index of medium. Examples ======== @@ -526,18 +527,19 @@ class BeamParameter(Expr): # subclass it. See: # https://groups.google.com/d/topic/sympy/7XkU07NRBEs/discussion - def __new__(cls, wavelen, z, z_r=None, w=None): + def __new__(cls, wavelen, z, z_r=None, w=None, n=1): wavelen = sympify(wavelen) z = sympify(z) + n = sympify(n) if z_r is not None and w is None: z_r = sympify(z_r) elif w is not None and z_r is None: - z_r = waist2rayleigh(sympify(w), wavelen) - else: - raise ValueError('Constructor expects exactly one named argument.') + z_r = waist2rayleigh(sympify(w), wavelen, n) + elif z_r is None and w is None: + raise ValueError('Must specify one of w and z_r.') - return Expr.__new__(cls, wavelen, z, z_r) + return Expr.__new__(cls, wavelen, z, z_r, n) @property def wavelen(self): @@ -551,6 +553,10 @@ def z(self): def z_r(self): return self.args[2] + @property + def n(self): + return self.args[3] + @property def q(self): """ @@ -584,7 +590,8 @@ def radius(self): @property def w(self): """ - The beam radius at `1/e^2` intensity. + The radius of the beam w(z), at any position z along the beam. + The beam radius at `1/e^2` intensity (axial value). See Also ======== @@ -605,12 +612,12 @@ def w(self): @property def w_0(self): """ - The beam waist (minimal radius). + The minimal radius of beam at `1/e^2` intensity (peak value). See Also ======== - w : the beam radius at `1/e^2` intensity + w : the beam radius at `1/e^2` intensity (axial value). Examples ======== @@ -620,7 +627,7 @@ def w_0(self): >>> p.w_0 0.00100000000000000 """ - return sqrt(self.z_r/pi*self.wavelen) + return sqrt(self.z_r/(pi*self.n)*self.wavelen) @property def divergence(self): @@ -678,7 +685,7 @@ def waist_approximation_limit(self): # Utilities ### -def waist2rayleigh(w, wavelen): +def waist2rayleigh(w, wavelen, n=1): """ Calculate the rayleigh range from the waist of a gaussian beam. 
@@ -697,7 +704,7 @@ def waist2rayleigh(w, wavelen): pi*w**2/wavelen """ w, wavelen = map(sympify, (w, wavelen)) - return w**2*pi/wavelen + return w**2*n*pi/wavelen def rayleigh2waist(z_r, wavelen): diff --git a/sympy/physics/optics/tests/test_gaussopt.py b/sympy/physics/optics/tests/test_gaussopt.py index ed099d254433..5271f3cbb69c 100644 --- a/sympy/physics/optics/tests/test_gaussopt.py +++ b/sympy/physics/optics/tests/test_gaussopt.py @@ -87,11 +87,16 @@ def test_gauss_opt(): w_i**2/w_o**2 - sqrt(w_i**2/w_o**2 - pi**2*w_i**4/(f**2*l**2)))/w_i**2 assert conjugate_gauss_beams(l, w_i, w_o, f=f)[2] == f - z, l, w = symbols('z l r', positive=True) - p = BeamParameter(l, z, w=w) - assert p.radius == z*(pi**2*w**4/(l**2*z**2) + 1) - assert p.w == w*sqrt(l**2*z**2/(pi**2*w**4) + 1) - assert p.w_0 == w - assert p.divergence == l/(pi*w) - assert p.gouy == atan2(z, pi*w**2/l) + z, l, w_0 = symbols('z l w_0', positive=True) + p = BeamParameter(l, z, w=w_0) + assert p.radius == z*(pi**2*w_0**4/(l**2*z**2) + 1) + assert p.w == w_0*sqrt(l**2*z**2/(pi**2*w_0**4) + 1) + assert p.w_0 == w_0 + assert p.divergence == l/(pi*w_0) + assert p.gouy == atan2(z, pi*w_0**2/l) assert p.waist_approximation_limit == 2*l/pi + + p = BeamParameter(530e-9, 1, w=1e-3, n=2) + assert streq(p.q, 1 + 3.77358490566038*I*pi) + assert streq(N(p.z_r), Float(11.8550666173200)) + assert streq(N(p.w_0), Float(0.00100000000000000))
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xonsh__xonsh-5838@13f05f5
xonsh/xonsh
Python
5,838
feat: #5745 - send Ctrl+C event on Windows instead of forceful terminate
<!--- Thanks for opening a PR on xonsh! Please do this: 1. Include a news file with your PR (https://xon.sh/devguide.html#changelog). 2. Add the documentation for your feature into `/docs`. 3. Add the example of usage or before-after behavior. 4. Mention the issue that this PR is addressing e.g. `#1234`. --> Resolve #5745. The existing behavior [was introduced in 2015](https://github.com/xonsh/xonsh/commit/3aa37682f9441bb7c8bfb492325c88e0900ed327#diff-75c83180c6250bfa7c142ed214f8c71e7704872f3b0c63b167895cf0ae892989R106-R107) ~~and I suspect was because there was no equivalent to [signal.CTRL_C_EVENT](https://docs.python.org/3/library/signal.html#signal.CTRL_C_EVENT) at the time in Python 2.7, it was introduced in Python 3.2.~~ Xonsh only supported 3.4+ at the time and [signal.CTRL_C_EVENT](https://docs.python.org/3/library/signal.html#signal.CTRL_C_EVENT) was already available; it's unclear why the existing implementation choice was made by @adqm . # Before behavior: When pressing Ctrl+C on Windows a forceful terminate was sent using the taskkill command. ``` localhost@ ssh remote remote$ hello <Press Ctrl+C> Return 1 localhost@ ``` # After behavior: When pressing Ctrl+C on Windows a [signal.CTRL_C_EVENT](https://docs.python.org/3/library/signal.html#signal.CTRL_C_EVENT) will be sent instead. ``` localhost@ ssh remote remote$ hello^C remote$ ``` ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2025-05-04T00:08:53Z
Pressing Ctrl-C in Windows cancels a subprocess ## Current Behavior <!--- For general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`. Short, reproducible code snippets are highly appreciated. You can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1` to collect more information about the failure. --> When in a Xonsh interactive session running a subprocess such as ssh, pressing Ctrl-C will cause the subprocess to cancel instead of forwarding the SIGINT to the subprocess. ``` localhost @ ssh remotehost remotehost $ man grep ... Matcher Selection -E, --extended-regexp Interpret PATTERN as an extended regular expression (ERE, see below). Manual page grep(1) line 1 (press h for help or q to quit) ... <Press Ctrl-C> Return 1 <Xonsh exits SSH session with exit code: 1> ``` ## Expected Behavior <!--- What you expect and what is your real life use case. --> The SIGINT should be forwarded to the subprocess instead of canceling the subprocess. ## xonfig <details> ```xsh @ xonfig +-----------------------------+---------------------------+ | xonsh | 0.19.0 | | Python | 3.13.1 | | PLY | 3.11 | | have readline | False | | prompt toolkit | 3.0.48 | | shell type | prompt_toolkit | | history backend | sqlite | | pygments | 2.18.0 | | on posix | False | | on linux | False | | on darwin | False | | on windows | True | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib 1 | base16_shell | | xontrib 2 | prompt_bar | | xontrib 3 | pygitstatus | | xontrib 4 | vox | | xontrib 5 | voxapi | | xontrib 6 | commands | | RC file 1 | C:\Users\XXXXXX\.xonshrc | | UPDATE_OS_ENVIRON | False | | XONSH_CAPTURE_ALWAYS | False | | XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines | | THREAD_SUBPROCS | True | | XONSH_CACHE_SCRIPTS | True | +-----------------------------+---------------------------+ ``` </details> ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
I couldn't reproduce this, unfortunately. I ssh'd into a couple of different Ubuntu servers. Ran top, or `man grep`. Ctrl+C cancelled top and man. ``` xonfig +-----------------------------+---------------------+ | xonsh | 0.19.0 | | Git SHA | c059ac23 | | Commit Date | Dec 9 14:41:03 2024 | | Python | 3.13.1 | | PLY | 3.11 | | have readline | False | | prompt toolkit | None | | shell type | readline | | history backend | json | | pygments | None | | on posix | False | | on linux | False | | on darwin | False | | on windows | True | | on cygwin | False | | on msys2 | False | | is superuser | True | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | | RC file | [] | | UPDATE_OS_ENVIRON | False | | XONSH_CAPTURE_ALWAYS | False | | XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines | | THREAD_SUBPROCS | True | | XONSH_CACHE_SCRIPTS | True | +-----------------------------+---------------------+ ``` Related, I've been able to reproduce this without getting ssh involved, the following example uses less that's bundled with Git for Windows 2.41.0.windows.3 ``` pipx run --no-cache --python "C:\Users\XXXXXX\.pyenv\pyenv-win\versions\3.13.1\python.exe" "xonsh[full]" --no-rc @ less --help ... ESC-} ^RightArrow Right to last column displayed. ESC-{ ^LeftArrow Left to first column. HELP -- Press RETURN for more, or q when done : <Press Ctrl-C> Return 1 ``` What if you run xonsh without pipx? I realize arrow keys also get messed up afterwards, pressing `<Up Arrow>` on the keyboard appears to insert the characters `OA`. ``` <Launch conhost.exe> > C:\Users\XXXXXX\.pyenv\pyenv-win\versions\3.13.1\python.exe -m venv --upgrade-deps xonsh-5745 > xonsh-5745\Scripts\activate.bat > pip install --no-cache "xonsh[full]" ... Successfully installed prompt-toolkit-3.0.48 pygments-2.18.0 pyperclip-1.9.0 setproctitle-1.3.4 ujson-5.10.0 wcwidth-0.2.13 xonsh-0.19.0 > where xonsh C:\TMP\xonsh-5745\Scripts\xonsh.exe > xonsh --no-rc @ less --help ... g < ESC-< * Go to first line in file (or line N). G > ESC-> * Go to last line in file (or line N). p % * Go to beginning of file (or N percent into file). HELP -- Press RETURN for more, or q when done <Press Ctrl+C> [1] @ xonfig +-----------------------------+-----------------+ | xonsh | 0.19.0 | | Python | 3.13.1 | | PLY | 3.11 | | have readline | False | | prompt toolkit | 3.0.48 | | shell type | prompt_toolkit | | history backend | json | | pygments | 2.18.0 | | on posix | False | | on linux | False | | on darwin | False | | on windows | True | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | | RC file | [] | | UPDATE_OS_ENVIRON | False | | XONSH_CAPTURE_ALWAYS | False | | XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines | | THREAD_SUBPROCS | True | | XONSH_CACHE_SCRIPTS | True | +-----------------------------+-----------------+ @ ``` What terminal emulator are you using? Hi Kyle. I followed your steps, just with a standard Python 3.13.1, not one via pyenv-win. Ctrl+C just quits `less --help` (this doesn't happen in Bash alone, but is what I'd expect?) and messes up the arrow keys for me too in the Xonsh session (giving OA and OB), and also makes anything I type in the parent bash session invisible after quitting Xonsh. Andy, I've reproduced this behaviour in Windows Terminal, Alacritty and WezTerm. 
I copied this line from the Windows Terminal Bash profile: `C:/Program Files/Git/bin/bash.exe -i -l` and pasted it into the Alacritty and WezTerm config files (adjusting the latter into Lua). So I think the upstream bash is in interactive mode. Is it? I observe several similar issues - https://www.google.com/search?q=windows+git+bash+interactive+mode - may be it will help. > What terminal emulator are you using? I've reproduced this in Windows Terminal and Conhost My Windows Terminal Xonsh profile ```json { "commandline": "C:\\Users\\XXXX\\.local\\bin\\xonsh.exe -DXONSH_PROMPT_HIGHLIGHT_EXECUTABLE=1", "guid": "{b7af3958-5b9a-4e3f-ad60-de1f84338908}", "hidden": false, "icon": "C:\\Users\\XXXX\\Pictures\\conch_pixel-128.png", "name": "xonsh", "opacity": 85, "startingDirectory": null, "useAcrylic": false }, ``` Is there a good native Windows executable to use as a test case to isolate if Git for Windows programs are effected only? `C:\Windows\System32\more.com` quits on `<Ctrl+C>` as part of expected behavior while commands like *nix's less and ssh do not.
[ { "body": "## Current Behavior\r\n<!---\r\nFor general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`.\r\nShort, reproducible code snippets are highly appreciated.\r\nYou can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1`\r\nto collect more information about the failure.\r\n-->\r\nWhen in a Xonsh interactive session running a subprocess such as ssh, pressing Ctrl-C will cause the subprocess to cancel instead of forwarding the SIGINT to the subprocess.\r\n\r\n```\r\nlocalhost @ ssh remotehost\r\nremotehost $ man grep\r\n...\r\n Matcher Selection\r\n -E, --extended-regexp\r\n Interpret PATTERN as an extended regular expression (ERE, see below).\r\n\r\n Manual page grep(1) line 1 (press h for help or q to quit)\r\n...\r\n<Press Ctrl-C>\r\n Return 1\r\n<Xonsh exits SSH session with exit code: 1>\r\n```\r\n\r\n## Expected Behavior\r\n<!--- What you expect and what is your real life use case. -->\r\n\r\nThe SIGINT should be forwarded to the subprocess instead of canceling the subprocess.\r\n\r\n## xonfig\r\n\r\n<details>\r\n\r\n```xsh\r\n@ xonfig\r\n+-----------------------------+---------------------------+\r\n| xonsh | 0.19.0 |\r\n| Python | 3.13.1 |\r\n| PLY | 3.11 |\r\n| have readline | False |\r\n| prompt toolkit | 3.0.48 |\r\n| shell type | prompt_toolkit |\r\n| history backend | sqlite |\r\n| pygments | 2.18.0 |\r\n| on posix | False |\r\n| on linux | False |\r\n| on darwin | False |\r\n| on windows | True |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| xontrib 1 | base16_shell |\r\n| xontrib 2 | prompt_bar |\r\n| xontrib 3 | pygitstatus |\r\n| xontrib 4 | vox |\r\n| xontrib 5 | voxapi |\r\n| xontrib 6 | commands |\r\n| RC file 1 | C:\\Users\\XXXXXX\\.xonshrc |\r\n| UPDATE_OS_ENVIRON | False |\r\n| XONSH_CAPTURE_ALWAYS | False |\r\n| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines |\r\n| THREAD_SUBPROCS | True |\r\n| XONSH_CACHE_SCRIPTS | True |\r\n+-----------------------------+---------------------------+\r\n```\r\n\r\n</details>\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 5745, "title": "Pressing Ctrl-C in Windows cancels a subprocess" } ]
5ecdcae86d8d430f125b61a387cf6bb4ab877940
{ "head_commit": "13f05f5aa80aa0dbb6c8cdf6177d05430907555c", "head_commit_message": "#5745 - send Ctrl+C event on Windows instead of forceful terminate", "patch_to_review": "diff --git a/xonsh/procs/jobs.py b/xonsh/procs/jobs.py\nindex 0c41a33fac..3befb9a894 100644\n--- a/xonsh/procs/jobs.py\n+++ b/xonsh/procs/jobs.py\n@@ -214,6 +214,9 @@ def _kill(job):\n stderr=subprocess.STDOUT,\n )\n \n+ def _ctrl_c(job):\n+ os.kill(job[\"obj\"].pid, signal.CTRL_C_EVENT)\n+\n _hup = _kill # there is no equivalent of SIGHUP on Windows\n \n def ignore_sigtstp():\n@@ -239,10 +242,7 @@ def wait_for_active_job(last_task=None, backgrounded=False, return_error=False):\n except subprocess.TimeoutExpired:\n pass\n except KeyboardInterrupt:\n- try:\n- _kill(active_task)\n- except subprocess.CalledProcessError:\n- pass # ignore error if process closed before we got here\n+ _ctrl_c(active_task)\n return wait_for_active_job(last_task=active_task)\n \n else:\n" }
[ { "diff_hunk": "@@ -239,10 +242,7 @@ def wait_for_active_job(last_task=None, backgrounded=False, return_error=False):\n except subprocess.TimeoutExpired:\n pass\n except KeyboardInterrupt:\n- try:\n- _kill(active_task)\n- except subprocess.CalledProcessError:\n- pass # ignore error if process closed before we got here\n+ _ctrl_c(active_task)", "line": null, "original_line": 245, "original_start_line": null, "path": "xonsh/procs/jobs.py", "start_line": null, "text": "@user1:\nI actually don't think you need to send *anything*. Both the Python REPL and NeoVim (installed by winget) do their CTRL-C handling just fine on their own, tested both in Windows Terminal and in Command Prompt (cmd.exe).\r\n\r\nWhen you do this `_ctrl_c()` here, processes get a duplicate CTRL-C in short succession. In case of Python 3.13 from the Windows Store it sends it into an infinite loop of KeyboardInterrupts.\r\n\r\nJust saying `pass` here works fine for me.\n\n@author:\nValidated locally, thanks! 👍" } ]
cbcdcd2f4cb4c06c9a244e23212e0e69fb099bbe
diff --git a/xonsh/procs/jobs.py b/xonsh/procs/jobs.py index 0c41a33fac..8a16e36a99 100644 --- a/xonsh/procs/jobs.py +++ b/xonsh/procs/jobs.py @@ -234,15 +234,8 @@ def wait_for_active_job(last_task=None, backgrounded=False, return_error=False): proc = active_task["obj"] _continue(active_task) while proc.returncode is None: - try: + with contextlib.suppress(subprocess.TimeoutExpired, KeyboardInterrupt): proc.wait(0.01) - except subprocess.TimeoutExpired: - pass - except KeyboardInterrupt: - try: - _kill(active_task) - except subprocess.CalledProcessError: - pass # ignore error if process closed before we got here return wait_for_active_job(last_task=active_task) else:
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xonsh__xonsh-5744@d70ddfc
xonsh/xonsh
Python
5,744
try just using CaseInsensitiveDict from requests to see if it works
closes #5627 replaces #5633 option 1: just import `CaseInsensitiveDict` from `requests.strucures` pros: - not much typing cons: - replaces one dependency with another - requests is not typed natively so either has to be excluded from mypy, or also import `types-requests` as an additional dependency Option 2: hard code `CaseInsensitiveDict` by overriding python native dict. below is a possibility. pros: behavior is then native in xonsh cons: more code to maintain ``` class CaseInsensitiveDict(Dict[str, Any]): def __init__(self, *args, **kwargs): super().__init__() self._store = {} self.update(*args, **kwargs) def __setitem__(self, key, value): # Store the key in lowercase but preserve the original case for display self._store[key.lower()] = key super().__setitem__(key.lower(), value) def __getitem__(self, key): return super().__getitem__(key.lower()) def __delitem__(self, key): del self._store[key.lower()] super().__delitem__(key.lower()) def __contains__(self, key): return key.lower() in self._store def get(self, key, default=None): return super().get(key.lower(), default) def update(self, *args, **kwargs): for k, v in dict(*args, **kwargs).items(): self[k] = v def keys(self): # Return the original keys with their original casing return (self._store[k] for k in self._store) def items(self): return ((self._store[k], self[k]) for k in self._store) def __repr__(self): return f"{self.__class__.__name__}({dict(self.items())})" def copy(self): return CaseInsensitiveDict(self.items()) ``` ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
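For option 1, the behaviour being relied on from `requests.structures.CaseInsensitiveDict` is roughly the following; the path value is made up purely for illustration.

```python
from requests.structures import CaseInsensitiveDict

cache = CaseInsensitiveDict()
cache["Python.EXE"] = r"C:\Windows\py\python.exe"  # illustrative path

# Lookups and membership tests ignore case, which is what the
# Windows commands cache needs.
assert cache["python.exe"] == r"C:\Windows\py\python.exe"
assert "PYTHON.EXE" in cache

# Iteration yields keys with the casing they were first stored under.
assert list(cache) == ["Python.EXE"]
```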
2024-12-07T04:01:11Z
problems importing case_insensitive_dict ## Current Behavior <!--- The Regolith package imports Xonsh in its __init__ and when I rebuilt my env I go an import error: --> Traceback (if applicable): <details> ```xsh Traceback (most recent call last): File "C:\Users\simon\miniconda3\envs\r312\Scripts\regolith", line 2, in <module> from regolith.main import main File "C:\Users\simon\dev\regolith\src\regolith\__init__.py", line 6, in <module> XSH.load(execer=Execer()) File "C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\xonsh\built_ins.py", line 633, in load from xonsh.commands_cache import CommandsCache File "C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\xonsh\commands_cache.py", line 27, in <module> from case_insensitive_dict import CaseInsensitiveDict as CacheDict ModuleNotFoundError: No module named 'case_insensitive_dict' ``` I was able to install `case_insensitive_dict` from pip (it wasn't found in conda-forge or conda defaults) but then I still got an import error: ```xsh $ regolith --version C:\Users\simon\dev\regolith\src\regolith\version.py:38: UserWarning: Package metadata not found. warn("Package metadata not found.") Traceback (most recent call last): File "C:\Users\simon\miniconda3\envs\r312\Scripts\regolith", line 2, in <module> from regolith.main import main File "C:\Users\simon\dev\regolith\src\regolith\__init__.py", line 14, in <module> XSH.load(execer=Execer()) File "C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\xonsh\built_ins.py", line 633, in load from xonsh.commands_cache import CommandsCache File "C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\xonsh\commands_cache.py", line 27, in <module> from case_insensitive_dict import CaseInsensitiveDict as CacheDict ImportError: cannot import name 'CaseInsensitiveDict' from 'case_insensitive_dict' (C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\case_insensitive_dict\__init__.py) ``` Any help with this would be appreciated. </details> ## Expected Behavior <!--- I would like regolith to run again without this importerror --> ## xonfig <details> ```xsh bash: xonfig: command not found ``` </details> ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
sorry, some stuff got cut off there in my back and forth. I recently created a new conda env and got xonsh 0.18.2 py312h2e8e312_0 and got this error. There was no such error in my previous env (which was also python 3.12 but built some time before) Hey! I'm not the Windows user but I see there is `case_insensitive_dict` package in pypi. What if you install it? (https://pypi.org/project/case-insensitive-dictionary/) Thanks @anki-code . I tried that and then I got the import error (please see the second stack dump in the post for the full stack dump). ``` from case_insensitive_dict import CaseInsensitiveDict as CacheDict ImportError: cannot import name 'CaseInsensitiveDict' from 'case_insensitive_dict' (C:\Users\simon\miniconda3\envs\r312\Lib\site-packages\case_insensitive_dict\__init__.py)``` If you install xonsh 0.16.0 is it solves the issue? ```xsh xpip install git+https://github.com/xonsh/[email protected] # restart xonsh ``` I just see no any installations of `case_insensitive_dict` in the xonsh code. Just [import](https://github.com/search?q=repo%3Axonsh%2Fxonsh%20case_insensitive_dict&type=code). It's interesting how it worked before... @anki-code yes, I can confirm, going back to 0.16.0 cured the problem and allowed it to run. It will be cool to catch the difference. PR with fix is welcome! I will look into it. All I know atm is that 0.16.0 it ran, and it gave the error in 0.17.0 and 0.18.0. But I have a starting point for digging in to it. We can remove this external dependency. Really, https://github.com/xonsh/xonsh/blob/31e7c4204ad41368aa0b47c18292f77c823c575b/xonsh/commands_cache.py#L26-L29 We can remove this completely and replace `CacheDict` to `dict` in the code. @sbillinge if you can prepare the PR it will be cool. > Really, > > https://github.com/xonsh/xonsh/blob/31e7c4204ad41368aa0b47c18292f77c823c575b/xonsh/commands_cache.py#L26-L29 > > We can remove this completely and replace `CacheDict` to `dict` in the code. > > @sbillinge if you can prepare the PR it will be cool. Yes, I am happy to do that. I can do it tonight or tomorrow. Meetings today...:(
[ { "body": "## Current Behavior\r\n<!---\r\nThe Regolith package imports Xonsh in its __init__ and when I rebuilt my env I go an import error:\r\n-->\r\n\r\nTraceback (if applicable):\r\n\r\n<details>\r\n\r\n```xsh\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Scripts\\regolith\", line 2, in <module>\r\n from regolith.main import main\r\n File \"C:\\Users\\simon\\dev\\regolith\\src\\regolith\\__init__.py\", line 6, in <module>\r\n XSH.load(execer=Execer())\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Lib\\site-packages\\xonsh\\built_ins.py\", line 633, in load\r\n from xonsh.commands_cache import CommandsCache\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Lib\\site-packages\\xonsh\\commands_cache.py\", line 27, in <module>\r\n from case_insensitive_dict import CaseInsensitiveDict as CacheDict\r\nModuleNotFoundError: No module named 'case_insensitive_dict'\r\n```\r\n\r\nI was able to install `case_insensitive_dict` from pip (it wasn't found in conda-forge or conda defaults) but then I still got an import error:\r\n\r\n```xsh\r\n$ regolith --version\r\nC:\\Users\\simon\\dev\\regolith\\src\\regolith\\version.py:38: UserWarning: Package metadata not found.\r\n warn(\"Package metadata not found.\")\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Scripts\\regolith\", line 2, in <module>\r\n from regolith.main import main\r\n File \"C:\\Users\\simon\\dev\\regolith\\src\\regolith\\__init__.py\", line 14, in <module>\r\n XSH.load(execer=Execer())\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Lib\\site-packages\\xonsh\\built_ins.py\", line 633, in load\r\n from xonsh.commands_cache import CommandsCache\r\n File \"C:\\Users\\simon\\miniconda3\\envs\\r312\\Lib\\site-packages\\xonsh\\commands_cache.py\", line 27, in <module>\r\n from case_insensitive_dict import CaseInsensitiveDict as CacheDict\r\nImportError: cannot import name 'CaseInsensitiveDict' from 'case_insensitive_dict' (C:\\Users\\simon\\miniconda3\\envs\\r312\\Lib\\site-packages\\case_insensitive_dict\\__init__.py)\r\n```\r\nAny help with this would be appreciated.\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\n<!--- I would like regolith to run again without this importerror -->\r\n\r\n## xonfig\r\n\r\n<details>\r\n\r\n```xsh\r\nbash: xonfig: command not found\r\n```\r\n\r\n</details>\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 5627, "title": "problems importing case_insensitive_dict" } ]
0791372f30ac89bbf5cb2078e47f842598d5206d
{ "head_commit": "d70ddfc8b4ed0cc96cf3d388321b32e25c2cc689", "head_commit_message": "news", "patch_to_review": "diff --git a/news/remove-caseinsdict.rst b/news/remove-caseinsdict.rst\nnew file mode 100644\nindex 0000000000..8d1df06b0a\n--- /dev/null\n+++ b/news/remove-caseinsdict.rst\n@@ -0,0 +1,23 @@\n+**Added:**\n+\n+* <news item>\n+\n+**Changed:**\n+\n+* <news item>\n+\n+**Deprecated:**\n+\n+* <news item>\n+\n+**Removed:**\n+\n+* `case_insensitive_dictionary` dependency\n+\n+**Fixed:**\n+\n+* <news item>\n+\n+**Security:**\n+\n+* <news item>\ndiff --git a/pyproject.toml b/pyproject.toml\nindex fa6e2688be..0359f900ce 100644\n--- a/pyproject.toml\n+++ b/pyproject.toml\n@@ -15,9 +15,7 @@ authors = [{ name = \"Anthony Scopatz\" }, { email = \"[email protected]\" }]\n maintainers = [{ name = \"Xonsh Community\" }, { email = \"[email protected]\" }]\n license = { text = \"BSD 2-Clause License\" }\n requires-python = \">=3.9\"\n-dependencies = [\n- \"case-insensitive-dictionary; platform_system=='Windows'\",\n-]\n+dependencies = [\"requests; platform_system=='Windows'\",]\n \n [tool.setuptools.dynamic]\n version = {attr = \"xonsh.__version__\"}\ndiff --git a/xonsh/commands_cache.py b/xonsh/commands_cache.py\nindex 0e81915840..f3f2df94dd 100644\n--- a/xonsh/commands_cache.py\n+++ b/xonsh/commands_cache.py\n@@ -24,7 +24,7 @@\n )\n \n if ON_WINDOWS:\n- from case_insensitive_dict import CaseInsensitiveDict as CacheDict\n+ from requests.structures import CaseInsensitiveDict as CacheDict # type: ignore\n else:\n CacheDict = dict\n \n" }
[ { "diff_hunk": "@@ -15,9 +15,7 @@ authors = [{ name = \"Anthony Scopatz\" }, { email = \"[email protected]\" }]\n maintainers = [{ name = \"Xonsh Community\" }, { email = \"[email protected]\" }]\n license = { text = \"BSD 2-Clause License\" }\n requires-python = \">=3.9\"\n-dependencies = [\n- \"case-insensitive-dictionary; platform_system=='Windows'\",\n-]\n+dependencies = [\"requests; platform_system=='Windows'\",]", "line": null, "original_line": 18, "original_start_line": null, "path": "pyproject.toml", "start_line": null, "text": "@user1:\nCould you just use the code directly? It would be a single class and the implementation is simple\n\n@author:\nYes, I like that better too. Get rid of this dependency..... I will do that and push. I have to run for a plane. I will try and push that at the airport." } ]
77355e5923b24b45986943c4be53d5f67a158bbc
diff --git a/news/remove-caseinsdict.rst b/news/remove-caseinsdict.rst new file mode 100644 index 0000000000..196e04f0fd --- /dev/null +++ b/news/remove-caseinsdict.rst @@ -0,0 +1,24 @@ +**Added:** + +* <news item> + +**Changed:** + +* replaced `case_insensitive_dictionary` dependency with local + `CaseInsensitiveDict` class + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* <news item> + +**Security:** + +* <news item> diff --git a/pyproject.toml b/pyproject.toml index fa6e2688be..6323236986 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -15,9 +15,7 @@ authors = [{ name = "Anthony Scopatz" }, { email = "[email protected]" }] maintainers = [{ name = "Xonsh Community" }, { email = "[email protected]" }] license = { text = "BSD 2-Clause License" } requires-python = ">=3.9" -dependencies = [ - "case-insensitive-dictionary; platform_system=='Windows'", -] +dependencies = [] [tool.setuptools.dynamic] version = {attr = "xonsh.__version__"} diff --git a/tests/test_commands_cache.py b/tests/test_commands_cache.py index b8825ba1bd..d90dec63c3 100644 --- a/tests/test_commands_cache.py +++ b/tests/test_commands_cache.py @@ -8,6 +8,7 @@ from xonsh.commands_cache import ( SHELL_PREDICTOR_PARSER, + CaseInsensitiveDict, CommandsCache, _Commands, executables_in, @@ -306,3 +307,77 @@ def test_executables_in(xession): else: result = set(executables_in(test_path)) assert expected == result + + +def test_caseinsdict_constructor(): + actual = CaseInsensitiveDict({"key1": "val1", "Key2": "Val2"}) + assert isinstance(actual, CaseInsensitiveDict) + assert actual["key1"] == "val1" + assert actual["Key2"] == "Val2" + + +def test_caseinsdict_getitem(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert actual["Key1"] == "Val1" + assert actual["key1"] == "Val1" + + +def test_caseinsdict_setitem(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + actual["Key1"] = "Val2" + assert actual["Key1"] == "Val2" + assert actual["key1"] == "Val2" + actual["key1"] = "Val3" + assert actual["Key1"] == "Val3" + assert actual["key1"] == "Val3" + + +def test_caseinsdict_delitem(): + actual = CaseInsensitiveDict({"Key1": "Val1", "Key2": "Val2"}) + del actual["Key1"] + assert actual == CaseInsensitiveDict({"Key2": "Val2"}) + del actual["key2"] + assert actual == CaseInsensitiveDict({}) + + +def test_caseinsdict_contains(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert actual.__contains__("Key1") + assert actual.__contains__("key1") + assert not actual.__contains__("key2") + + +def test_caseinsdict_get(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert actual.get("Key1") == "Val1" + assert actual.get("key1") == "Val1" + assert actual.get("key2", "no val") == "no val" + assert actual.get("key1", "no val") == "Val1" + + +def test_caseinsdict_update(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + actual.update({"Key2": "Val2"}) + assert actual["key2"] == "Val2" + + +def test_caseinsdict_keys(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert next(actual.keys()) == "Key1" + + +def test_caseinsdict_items(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert next(actual.items()) == ("Key1", "Val1") + + +def test_caseinsdict_repr(): + actual = CaseInsensitiveDict({"Key1": "Val1"}) + assert actual.__repr__() == "CaseInsensitiveDict({'Key1': 'Val1'})" + + +def test_caseinsdict_copy(): + initial = CaseInsensitiveDict({"Key1": "Val1"}) + actual = initial.copy() + assert actual == initial + assert id(actual) != id(initial) diff --git 
a/xonsh/commands_cache.py b/xonsh/commands_cache.py index 0e81915840..bfd50f859e 100644 --- a/xonsh/commands_cache.py +++ b/xonsh/commands_cache.py @@ -23,8 +23,52 @@ is_executable_in_windows, ) + +class CaseInsensitiveDict(dict[tp.Any, tp.Any]): + def __init__(self, *args, **kwargs): + super().__init__() + self._store = {} + self.update(*args, **kwargs) + + def __setitem__(self, key, value): + # Store the key in lowercase but preserve the original case for display + self._store[key.casefold()] = key + super().__setitem__(key.casefold(), value) + + def __getitem__(self, key): + return super().__getitem__(key.casefold()) + + def __delitem__(self, key): + del self._store[key.casefold()] + super().__delitem__(key.casefold()) + + def __contains__(self, key): + return key.casefold() in self._store + + def get(self, key, default=None): + return super().get(key.casefold(), default) + + def update(self, *args, **kwargs): + for k, v in dict(*args, **kwargs).items(): + self[k] = v + + def keys(self): + # Return the original keys with their original casing + return (self._store[k] for k in self._store) + + def items(self): + return ((self._store[k], self[k]) for k in self._store) + + def __repr__(self): + return f"{self.__class__.__name__}({dict(self.items())})" + + def copy(self): + return CaseInsensitiveDict(self.items()) + + +CacheDict: tp.Union[type[CaseInsensitiveDict], type[dict]] if ON_WINDOWS: - from case_insensitive_dict import CaseInsensitiveDict as CacheDict + CacheDict = CaseInsensitiveDict else: CacheDict = dict
{ "difficulty": "low", "estimated_review_effort": 3, "problem_domain": "Dependency Updates & Env Compatibility" }
xonsh__xonsh-5491@0390a62
xonsh/xonsh
Python
5,491
Do not load `~/.xonshrc` in non-interactive mode
### Motivation In #5099 was introduced the logic where all RC files are loaded in interactive and non-interactive modes. This logic is not working good for home based `~/.xonshrc` file. First of all `~/.xonshrc` is the buffer of settings focused on interactive mode. Most tools with integration with xonsh (e.g. `conda`, `zoxide`, etc) offer to just do `echo "init_tool()" >> ~/.xonshrc` (`conda` has around 20 lines of init code) to start using the tool and users are doing this without any doubts. But because of after 5099 `~/.xonshrc` is executed in non-interactive mode the adding code to it leads to unexpected and unintended side effects: * If you run a script with shebang (e.g. `#!/usr/bin/env xonsh` or [xonsh-awesome-cli-app](https://github.com/anki-code/xonsh-awesome-cli-app)) or just from `xonsh script.xsh` the code will be unexpected and unintended slower. * If you're using xonsh-based tools (e.g. you install them using pip) and run them in environment that has no packages that initiated in `~/.xonshrc` you will see epic errors. * Additional context: * Bash and Zsh do not load `~/.bashrc` and `~/.zshrc` in non-interactive mode by the same reasons. * We have welcome message `Create ~/.xonshrc file manually or use xonfig to suppress the welcome message` and we don't want to make the process of creating this file complex. All of this leads to bad unexpected and unintended experience. This PR is to fix this. ### Expectation By doing this fix we assume that experienced user who wants to build good repeatable run control files will use another ways to create config files and this has [reflection in docs](https://github.com/xonsh/xonsh/blob/8860f2bd5273d5f3fc08ccf6be6af8163bfec0bd/docs/xonshrc.rst). In the nutshell if you want to create the RC files that affect every run of code you should use one or many of these ways: * Cross-desktop group (XDG) compliant `~/.config/xonsh/rc.xsh` control file. * The system-wide control file `/etc/xonsh/xonshrc` for Linux and OSX and in `%ALLUSERSPROFILE%\xonsh\xonshrc` on Windows. It controls options that are applied to all users of Xonsh on a given system. * The home-based directory `~/.config/xonsh/rc.d/` and system `/etc/xonsh/rc.d/` can contain .xsh files. They will be executed at startup in order. This allows for drop-in configuration where your configuration can be split across scripts and common and local configurations more easily separated. In your configs you need to check `$XONSH_INTERACTIVE` and `$XONSH_LOGIN` explicitly. ### Before `~/.xonshrc` is used in non-interactive mode. ```xsh echo "echo RC" >> ~/.xonshrc cd /tmp echo "echo Script" > script.xsh xonsh script.xsh # RC # Script ``` ```xsh cd /tmp echo 'echo RC' >> ~/.xonshrc echo '#!/usr/bin/env xonsh' > myscript chmod +x myscript ./myscript # RC ``` ### After `~/.xonshrc` is not used in non-interactive mode. Use `-i` if you need it. 
```xsh echo "echo RC" >> ~/.xonshrc cd /tmp echo "echo Script" > script.xsh xonsh script.xsh # Script xonsh -i script.xsh # RC # Script ``` ```xsh cd /tmp echo 'echo RC' >> ~/.xonshrc echo '#!/usr/bin/env xonsh' > myscript chmod +x myscript ./myscript ``` Closes #5488 #4096 #5496 ### Fun I want to leave here some nice representation of how it works in bash/sh/zsh from [twitter post](https://twitter.com/paxx39/status/1742768007154479109): ![image](https://github.com/xonsh/xonsh/assets/1708680/cd7b3803-483f-4d5d-bf9d-baa61c794f68) [How it works in xonsh](https://github.com/xonsh/xonsh/blob/20081b5a5179afa82b587e0bd3795b72caa2a2a5/docs/xonshrc.rst): * Non-interactive -> `$XONSHRC` and `$XONSHRC_DIR` * Interactive -> The same + `~/.xonshrc` And all logic inside files ;) ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
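Stripped of the surrounding startup machinery, the change described in this PR amounts to filtering the home file out of the RC list when the session is not interactive. A rough stand-alone sketch follows; `filter_rc_files` and the toy RC tuple are illustrative names, not xonsh's real startup code.

```python
import os


def filter_rc_files(rc_files, interactive):
    # ~/.xonshrc stays an interactive-only convenience buffer, so it is
    # dropped for scripts, shebang runs, `xonsh script.xsh`, and so on.
    if interactive:
        return tuple(rc_files)
    home_xonshrc = os.path.expanduser("~/.xonshrc")
    return tuple(f for f in rc_files if f != home_xonshrc)


rc = (
    os.path.expanduser("~/.xonshrc"),
    os.path.expanduser("~/.config/xonsh/rc.xsh"),
)
print(filter_rc_files(rc, interactive=False))  # only the XDG file remains
print(filter_rc_files(rc, interactive=True))   # both entries are kept
```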
2024-06-11T16:31:01Z
Unintended loading `xonshrc` file in script mode My xonshrc is pretty silent and I suddenly noticed that xonshrc is loaded in script mode: ```xsh cd /tmp echo 'echo RC' >> ~/.xonshrc echo '#!/usr/bin/env xonsh' > myscript chmod +x myscript ./myscript # RC xonsh myscript.xsh # RC ``` When I run any xonsh script I have no any expectation that my huge `.xonshrc` with features for prompt will be loaded. Bash/zsh also don't load RC files when you run script: ```xsh cd /tmp echo 'echo RC' >> ~/.bashrc echo 'echo 1' > script.sh bash script.sh # 1 # No RC echo 'echo RC' >> ~/.zshrc echo 'echo 1' > script.zsh zsh script.zsh # 1 # No RC ``` ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
I think this is the result of #5099 (@cosineFish @bestlem). @gforsyth I think we need to fix/revert this in 0.17.0 because: * All xonsh scripts are affected. * If you have many libs in xonshrc and want to run xonsh script in env you will face with unintended errors about missing libs. It's bad experience. The user own xonshrc that is mostly for using it in prompt must NOT be loaded in script mode. > The user own xonshrc that is mostly for using it in prompt must NOT be loaded in script mode. I mean, that's the bash approach, but `zsh` and `fish` do load those files in script mode. A user can run using `--no-rc` to avoid loading their `xonshrc` file @gforsyth 1. zsh does not do this (see example in the first message). 2. I believe that if I have CLI xonsh `script.xsh` with shebang I should not do anything with `--no-rc`. I expect that it just works. See also [xonsh-awesome-cli-app](https://github.com/anki-code/xonsh-awesome-cli-app). So I'm completely for disabling load `~/.xonshrc` for scripts accordingly with expectations and bash, zsh behavior. If user wants to create xonshrc to load forever on user base level we already have `~/.config/xonsh/rc.xsh` ([doc](https://xon.sh/xonshrc.html)). ah, right, `zsh` does load `zshenv` though -- I don't want to get into having a million special cased startup files, though. I'm fine with this either way. If we do disable loading `rc` files in script mode, then we should ensure that a user can force loading them by passing `-i` I understand the idea from [here](https://github.com/xonsh/xonsh/issues/4096#issuecomment-792035141) to have this in every xonshrc file: ```xsh if $XONSH_INTERACTIVE: # ... ``` It looks clear BUT you forget about software around (e.g. `conda`, `zoxide`, `xonsh wizard`, `xonsh web`, etc) that just do: ```xsh echo "some_init_code()" >> ~/.xonshrc ``` And this kind of tools don't know about `$XONSH_INTERACTIVE`. And when they add init code into xonshrc directly ALL scripts will be affected by this and will produce errors and slow timings! This is why I think it's a not so good idea @gforsyth . We need more complex behavior. I'm thinking about at least push out `~/.xonshrc` to interactive mode. And one moment. I want to remember you all that we have already stepped on this rake when we had loading history on every xonsh run - #4178. Before this fix we observed all kinds of unexpected errors and bad experience with scripts. Do not do unexpected unintended work undercover. > I understand the idea from [here](https://github.com/xonsh/xonsh/issues/4096#issuecomment-792035141) to have this in every xonshrc file: > > ```python > if $XONSH_INTERACTIVE: > # ... > ``` > > It looks clear BUT you forget about software around (e.g. `conda`, `zoxide`, `xonsh wizard`, `xonsh web`, etc) that just do: > > ```python > echo "some_init_code()" >> ~/.xonshrc > ``` > > And this kind of tools don't know about `$XONSH_INTERACTIVE`. And when they add init code into xonshrc directly ALL scripts will be affected by this and will produce errors and slow timings! This is why I think it's a not so good idea @gforsyth . > > We need more complex behavior. I'm thinking about at least push out `~/.xonshrc` to interactive mode. This is exactly how fish does it - load the same config.fish in all cases but have test for --is-interactive and --is-login Why is this not an issue for fish? @bestlem I think it's issue for fish. Let's consider the situation. I want to write script `myscript` using fish and then I want to apply it for 1000 files. 
Why I need to load fish RC file with setting up fancy prompt 1000 times? If the answer - "you must know about checking for interactive usage and add checking in the RC file or add --no-rc in your script" - I treat this as "you must know everything in the world" and I don't like this. Second thing is from practice. All home `~/.*rc` files xonsh/bash/zsh are ordinarily unmanageable trash on most of systems I saw and applying the code from there to every execution of scripts is the shoot in the leg. I'm ok to keep the rest of RC places like it works now but not for `~/.xonshrc`. I'm in preparing the PR. Looking at my fish config item and fzf look at --is-interactive others don't affect script usage. In my experience I use my ~/.zshenv, config.fish etc to enforce a sane set of constant stuff across all my environments. and the PATH and environment variables will differ between Windows / WSL / MacOS/ Solaris /NeXT / Linux and if my machine is in London or Chicago when travelling or remote logging in. I don't what to have to make each script I write deal with all those cases when I can with any other shell just change one or two files for each OS. The environment configuration is not the job of my script. Actually I have been commenting on SO question that you must understand what each line in your shell config files does. If you don't then leave that line out. With experience you don't have to know the details of what that line does but you do have to understand its effect on your environment. There MUST be a way to run xonsh for scripts loading ~/.xonshrc - if you don't want it then there is --no-rc which you can put in the shebang of your script. The major problem before was that there was --no-rc but setting XONSH_INTERACTIVE was being done in the wrong place so I think if you loaded a config file ir set it to true in all cases. @bestlem I'm thinking about `~/.xonshrc` as an unmanageable buffer for random configurations. If you are experienced user who knows life you will use `~/.config/` and I assume you know what you are doing. So this is why in the PR I will make `~/.xonshrc` active only of interactive mode to prevent unexpected and unintended using in scripts. The rest behavior will stay as implemented. (JFYI I'm completely against of adding `--no-rc` to shebangs by default. This decision completely annihilates the value of `/etc/xonsh/xonshrc` and others. At the end we will have tons of tools with `--no-rc` in shebangs. What a hell will be.)
[ { "body": "My xonshrc is pretty silent and I suddenly noticed that xonshrc is loaded in script mode:\r\n\r\n```xsh\r\ncd /tmp\r\necho 'echo RC' >> ~/.xonshrc\r\necho '#!/usr/bin/env xonsh' > myscript\r\nchmod +x myscript\r\n\r\n./myscript\r\n# RC\r\n\r\nxonsh myscript.xsh\r\n# RC\r\n```\r\n\r\nWhen I run any xonsh script I have no any expectation that my huge `.xonshrc` with features for prompt will be loaded.\r\n\r\nBash/zsh also don't load RC files when you run script:\r\n\r\n```xsh\r\ncd /tmp\r\necho 'echo RC' >> ~/.bashrc\r\necho 'echo 1' > script.sh\r\nbash script.sh\r\n# 1 # No RC\r\n\r\necho 'echo RC' >> ~/.zshrc\r\necho 'echo 1' > script.zsh\r\nzsh script.zsh\r\n# 1 # No RC\r\n```\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 5488, "title": "Unintended loading `xonshrc` file in script mode" } ]
0d28a6731b1ea830bf5be355b43fe96318fd7020
{ "head_commit": "0390a62b62be60b36cc985f22d7d3b720b7661d1", "head_commit_message": "docs", "patch_to_review": "diff --git a/docs/xonshrc.rst b/docs/xonshrc.rst\nindex 0dda09bf6a..ae5cc25485 100644\n--- a/docs/xonshrc.rst\n+++ b/docs/xonshrc.rst\n@@ -2,26 +2,30 @@ Run Control File\n =========================\n Xonsh allows you to customize your shell behavior with run control files, called \"xonshrc\" files.\n These files are written either in the Xonsh language (a superset of Python) or in Python and are executed\n-exactly once at startup, only when running in interactive mode.\n+exactly once at startup.\n \n The control file usually contains:\n \n-* Assignment statements setting `environment variables <envvars.html>`_. This includes standard OS environment variables that affect other programs and many that Xonsh uses for itself.\n-* ``xonfig`` commands to load selected add-ins (\"`xontribs<tutorial_xontrib.html#loading-xontribs>`\")\n-* Xonsh function definitions\n+* Assignment statements setting `environment variables <envvars.html>`_. This includes standard OS environment variables that affect other programs and many that Xonsh uses for itself.\n+* ``xontrib`` commands to load selected add-ins (\"`xontribs<tutorial_xontrib.html#loading-xontribs>`\").\n+* Xonsh function definitions.\n * `Alias definitions <aliases.html>`_, many of which invoke the above functions with specified arguments.\n \n-The system-wide ``xonshrc`` file controls options that are applied to all users of Xonsh on a given system.\n-You can create this file in ``/etc/xonsh/xonshrc`` for Linux and OSX and in ``%ALLUSERSPROFILE%\\xonsh\\xonshrc`` on Windows.\n+First of all, you need to know about the home directory ``~/.xonshrc`` control file. This file is commonly used to put configurations for the user interactive prompt and it is executed automatically only for interactive Xonsh sessions.\n \n-Xonsh also allows a per-user run control file in your home directory, either\n-directly in the home directory at ``~/.xonshrc`` or, for XDG compliance, at ``~/.config/xonsh/rc.xsh``.\n-The options set per user override settings in the system-wide control file.\n+There are also a few places where Xonsh looks for run control files. These files will be executed automatically in both interactive and non-interactive modes, and you need to use the `$XONSH_INTERACTIVE <envvars.html#xonsh-interactive>`_ and `$XONSH_LOGIN <envvars.html#xonsh-login>`_ environment variables to determine what code you want to execute in each mode. Here is the list of run control files and directories:\n+\n+* Cross-desktop group (XDG) compliant ``~/.config/xonsh/rc.xsh`` control file.\n+* The system-wide control file ``/etc/xonsh/xonshrc`` for Linux and OSX and in ``%ALLUSERSPROFILE%\\xonsh\\xonshrc`` on Windows. It controls options that are applied to all users of Xonsh on a given system.\n+* The home-based directory ``~/.config/xonsh/rc.d/`` and system ``/etc/xonsh/rc.d/`` can contain ``.xsh`` files. They will be executed at startup in order. This allows for drop-in configuration where your configuration can be split across scripts and common and local configurations more easily separated.\n+\n+In addition:\n \n-Xonsh also supports configuration directories, from which all ``.xsh`` files will be sourced in order.\n-This allows for drop-in configuration where your configuration can be split across scripts and common\n-and local configuration more easily separated. 
By default, if the directory ``~/.config/xonsh/rc.d``\n-exists, any ``*.xsh`` files within will be sourced at startup.\n+* Use ``xonsh --no-rc`` to prevent using control files.\n+* Use ``xonsh --rc snail.xsh`` to run only a certain control file.\n+* Use ``xonsh -i script.xsh`` to run xonsh in interactive mode with loading all possible control files.\n+\n+The options set per user override settings in the system-wide control file.\n \n Xonsh provides 2 wizards to create your own \"xonshrc\". ``xonfig web`` provides basic settings, and ``xonfig wizard``\n steps you through all the available options.\ndiff --git a/xonsh/environ.py b/xonsh/environ.py\nindex a234d920f6..14118086f5 100644\n--- a/xonsh/environ.py\n+++ b/xonsh/environ.py\n@@ -1156,7 +1156,9 @@ class GeneralSetting(Xettings):\n )\n XONSH_INTERACTIVE = Var.with_default(\n True,\n- \"``True`` if xonsh is running interactively, and ``False`` otherwise.\",\n+ \"``True`` if xonsh is running interactively, and ``False`` otherwise. \"\n+ \"It's highly recommended to use this variable in your ``xonshrc`` files \"\n+ \"to split the code execution for interactive and non-interactive modes.\",\n is_configurable=False,\n )\n XONSH_LOGIN = Var.with_default(\ndiff --git a/xonsh/main.py b/xonsh/main.py\nindex cb710e6921..d44a4db0d3 100644\n--- a/xonsh/main.py\n+++ b/xonsh/main.py\n@@ -308,6 +308,16 @@ def _get_rc_files(shell_kwargs: dict, args, env):\n # otherwise, get the RC files from XONSHRC, and RC dirs from XONSHRC_DIR\n rc = env.get(\"XONSHRC\")\n rcd = env.get(\"XONSHRC_DIR\")\n+\n+ if not env.get(\"XONSH_INTERACTIVE\", False):\n+ \"\"\"\n+ Home ``~/.xonshrc`` file has special meaning and history. The ecosystem around shells treat this kind of files\n+ as the place where interactive tools and configs may be added. To avoid unintended and unexpected affection\n+ of this file on non interactive behavior we remote this file in non interactive mode e.g. script with shebang.\n+ \"\"\"\n+ home_xonshrc = os.path.expanduser(\"~/.xonshrc\")\n+ rc = tuple(c for c in rc if c != home_xonshrc)\n+\n return rc, rcd\n \n \n" }
[ { "diff_hunk": "@@ -308,6 +308,16 @@ def _get_rc_files(shell_kwargs: dict, args, env):\n # otherwise, get the RC files from XONSHRC, and RC dirs from XONSHRC_DIR\n rc = env.get(\"XONSHRC\")\n rcd = env.get(\"XONSHRC_DIR\")\n+\n+ if not env.get(\"XONSH_INTERACTIVE\", False):\n+ \"\"\"\n+ Home ``~/.xonshrc`` file has special meaning and history. The ecosystem around shells treat this kind of files\n+ as the place where interactive tools and configs may be added. To avoid unintended and unexpected affection\n+ of this file on non interactive behavior we remote this file in non interactive mode e.g. script with shebang.\n+ \"\"\"\n+ home_xonshrc = os.path.expanduser(\"~/.xonshrc\")", "line": null, "original_line": 318, "original_start_line": null, "path": "xonsh/main.py", "start_line": null, "text": "@user1:\nDoes this work on Windows? Maybe instead you can do\r\n```suggestion\r\n home_xonshrc = str((Path(\"~\") / \".xonshrc\").expanduser())\r\n```\n\n@author:\nI've added tests for this but it looks [Github CI is down](https://www.githubstatus.com/). The line you highlighted I copied from `default_xonshrc()` for consistency so I think it should work but if no I will use your proposal, thanks!" } ]
20081b5a5179afa82b587e0bd3795b72caa2a2a5
diff --git a/docs/xonshrc.rst b/docs/xonshrc.rst index 0dda09bf6a..e53231393e 100644 --- a/docs/xonshrc.rst +++ b/docs/xonshrc.rst @@ -2,26 +2,30 @@ Run Control File ========================= Xonsh allows you to customize your shell behavior with run control files, called "xonshrc" files. These files are written either in the Xonsh language (a superset of Python) or in Python and are executed -exactly once at startup, only when running in interactive mode. +exactly once at startup. The control file usually contains: -* Assignment statements setting `environment variables <envvars.html>`_. This includes standard OS environment variables that affect other programs and many that Xonsh uses for itself. -* ``xonfig`` commands to load selected add-ins ("`xontribs<tutorial_xontrib.html#loading-xontribs>`") -* Xonsh function definitions +* Assignment statements setting `environment variables <envvars.html>`_. This includes standard OS environment variables that affect other programs and many that Xonsh uses for itself. +* ``xontrib`` commands to load selected add-ins ("`xontribs<tutorial_xontrib.html#loading-xontribs>`"). +* Xonsh function definitions. * `Alias definitions <aliases.html>`_, many of which invoke the above functions with specified arguments. -The system-wide ``xonshrc`` file controls options that are applied to all users of Xonsh on a given system. -You can create this file in ``/etc/xonsh/xonshrc`` for Linux and OSX and in ``%ALLUSERSPROFILE%\xonsh\xonshrc`` on Windows. +First of all, you need to know about the home directory ``~/.xonshrc`` control file. This file is commonly used to put configurations for the user interactive prompt and it is executed automatically only for interactive Xonsh sessions. -Xonsh also allows a per-user run control file in your home directory, either -directly in the home directory at ``~/.xonshrc`` or, for XDG compliance, at ``~/.config/xonsh/rc.xsh``. -The options set per user override settings in the system-wide control file. +There are also a few places where Xonsh looks for run control files. These files will be executed automatically in both interactive and non-interactive modes, and you need to use the `$XONSH_INTERACTIVE <envvars.html#xonsh-interactive>`_ and `$XONSH_LOGIN <envvars.html#xonsh-login>`_ environment variables to determine what code you want to execute in each mode. Here is the list of run control files and directories: + +* Cross-desktop group (XDG) compliant ``~/.config/xonsh/rc.xsh`` control file. +* The system-wide control file ``/etc/xonsh/xonshrc`` for Linux and OSX and in ``%ALLUSERSPROFILE%\xonsh\xonshrc`` on Windows. It controls options that are applied to all users of Xonsh on a given system. +* The home-based directory ``~/.config/xonsh/rc.d/`` and system ``/etc/xonsh/rc.d/`` can contain ``.xsh`` files. They will be executed at startup in order. This allows for drop-in configuration where your configuration can be split across scripts and common and local configurations more easily separated. + +In addition: -Xonsh also supports configuration directories, from which all ``.xsh`` files will be sourced in order. -This allows for drop-in configuration where your configuration can be split across scripts and common -and local configuration more easily separated. By default, if the directory ``~/.config/xonsh/rc.d`` -exists, any ``*.xsh`` files within will be sourced at startup. +* Use ``xonsh --no-rc`` to prevent using control files. +* Use ``xonsh --rc snail.xsh`` to run only a certain control file. 
+* Use ``xonsh -i script.xsh`` to run xonsh in interactive mode with loading all possible control files. + +The options set per user override settings in the system-wide control file. Xonsh provides 2 wizards to create your own "xonshrc". ``xonfig web`` provides basic settings, and ``xonfig wizard`` steps you through all the available options. @@ -168,6 +172,7 @@ The following is a real-world example of such a file. .. include:: xonshrc.xsh :code: xonsh +See also `xontrib-rc-awesome <https://github.com/anki-code/xontrib-rc-awesome>`_. Real world sample rc.py ------------------------- diff --git a/news/xonshrc.rst b/news/xonshrc.rst new file mode 100644 index 0000000000..cbaf634756 --- /dev/null +++ b/news/xonshrc.rst @@ -0,0 +1,23 @@ +**Added:** + +* <news item> + +**Changed:** + +* The home based ``~/.xonshrc`` will not be executed in non-interactive mode (#5491). + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* Windows: fixed path to RC file in ``xonfig web``. + +**Security:** + +* <news item> diff --git a/tests/test_integrations.py b/tests/test_integrations.py index 38fb0835a9..a8e37a4bd4 100644 --- a/tests/test_integrations.py +++ b/tests/test_integrations.py @@ -50,8 +50,9 @@ def run_xonsh( single_command=False, interactive=False, path=None, - add_args: list = None, + args=None, timeout=20, + add_env=None, ): env = dict(os.environ) if path is None: @@ -65,20 +66,29 @@ def run_xonsh( env["PROMPT"] = "" # disable ansi escape codes env["TERM"] = "linux" + + if add_env: + env |= add_env + xonsh = shutil.which("xonsh", path=PATH) - args = [xonsh, "--no-rc"] + popen_args = [xonsh] + if not args: + popen_args += ["--no-rc"] + else: + popen_args += args if interactive: - args.append("-i") + popen_args.append("-i") + if cmd and isinstance(cmd, str) and not cmd.endswith("\n"): + # In interactive mode we need to emulate "Press Enter". 
+ cmd += "\n" if single_command: - args += ["-c", cmd] + popen_args += ["-c", cmd] input = None else: input = cmd - if add_args: - args += add_args proc = sp.Popen( - args, + popen_args, env=env, stdin=stdin, stdout=stdout, @@ -692,6 +702,8 @@ def _callme(args): @pytest.mark.flaky(reruns=3, reruns_delay=2) def test_script(case): script, exp_out, exp_rtn = case + if ON_DARWIN: + script = script.replace("tests/bin", str(Path(__file__).parent / "bin")) out, err, rtn = run_xonsh(script) if callable(exp_out): assert exp_out(out) @@ -1257,7 +1269,7 @@ def test_main_d(): assert out == "json\n" out, err, ret = run_xonsh( - add_args=["-DXONSH_HISTORY_BACKEND='dummy'"], + args=["--no-rc", "-DXONSH_HISTORY_BACKEND='dummy'"], cmd="print($XONSH_HISTORY_BACKEND)", single_command=True, ) @@ -1335,3 +1347,53 @@ def test_alias_stability_exception(): re.MULTILINE | re.DOTALL, ) assert "Bad file descriptor" not in out + + [email protected](reruns=3, reruns_delay=2) +def test_rc_no_xonshrc_for_non_interactive(tmpdir): + """Testing no ``~/.xonshrc`` execution in non-interactive commands (#5491).""" + rc_dir = tmpdir.mkdir("rc_dir") + user_home_dir = tmpdir.mkdir("user_home") + user_not_home_dir = tmpdir.mkdir("user_not_home") + + (rc_dir / "rc_dir.xsh").write_text("echo RC_DIR", encoding="utf8") + (user_home_dir / ".xonshrc").write_text("echo RC_HOME", encoding="utf8") + user_home_rc_path_crossplatform = str( + (Path(user_home_dir) / ".xonshrc").expanduser() + ) + (user_not_home_rc := user_not_home_dir / "rc.xsh").write_text( + "echo RC_NOT_HOME", encoding="utf8" + ) + xonshrc_files = [str(user_not_home_rc), str(user_home_rc_path_crossplatform)] + xonshrc_dir = [str(rc_dir)] + + # Here `eval()` is needed in Windows case where the path contains `:` symbol (e.g. `C:\\path`) and treated as list delimiter. + args = [ + f'-DHOME="{str(user_home_dir)}"', + f'-DXONSHRC="{os.pathsep.join(xonshrc_files)}"', + f'-DXONSHRC_DIR="{os.pathsep.join(xonshrc_dir)}"', + ] + cmd = "print(42+42)" + add_env = {"HOME": str(user_home_dir)} + out, err, ret = run_xonsh(cmd=cmd, interactive=False, args=args, add_env=add_env) + exp = ".*RC_NOT_HOME.*RC_DIR.*84.*" + assert re.match( + exp, + out, + re.MULTILINE | re.DOTALL, + ), f"Expected: {exp!r},\nResult: {out!r},\nargs={args!r}" + + out, err, ret = run_xonsh( + cmd=cmd + "\n", interactive=True, args=args, add_env=add_env + ) + + exp = ".*RC_NOT_HOME.*RC_HOME.*RC_DIR.*" + if not ON_WINDOWS: + # On Windows we well have `NoConsoleScreenBufferError` in interactive mode so avoid checking interactive output of the command. 
+ exp += "84.*" + + assert re.match( + exp, + out, + re.MULTILINE | re.DOTALL, + ), f"Expected: {exp!r},\nResult: {out!r},\nargs={args!r}" diff --git a/tests/test_pipelines.py b/tests/test_pipelines.py index 24f646f020..1cf35d9659 100644 --- a/tests/test_pipelines.py +++ b/tests/test_pipelines.py @@ -74,6 +74,7 @@ def patched_events(monkeypatch, xonsh_events, xonsh_session): ("!(echo hi | grep x)", "", "", ""), ), ) [email protected](reruns=3, reruns_delay=2) def test_command_pipeline_capture(cmdline, stdout, stderr, raw_stdout, xonsh_execer): pipeline: CommandPipeline = xonsh_execer.eval(cmdline) assert pipeline.out == stdout diff --git a/xonsh/environ.py b/xonsh/environ.py index a234d920f6..a8844cbff4 100644 --- a/xonsh/environ.py +++ b/xonsh/environ.py @@ -15,6 +15,7 @@ import typing as tp import warnings from collections import ChainMap +from pathlib import Path import xonsh.prompt.base as prompt from xonsh import __version__ as XONSH_VERSION @@ -624,6 +625,11 @@ def xonshconfig(env): return xc +def get_home_xonshrc_path(): + """Cross-platform implementation of getting ``~/.xonshrc`` path.""" + return str((Path("~") / ".xonshrc").expanduser()) + + @default_value def default_xonshrc(env) -> "tuple[str, ...]": """ @@ -633,7 +639,7 @@ def default_xonshrc(env) -> "tuple[str, ...]": dxrc = ( os.path.join(xonsh_sys_config_dir(env), "xonshrc"), os.path.join(xonsh_config_dir(env), "rc.xsh"), - os.path.expanduser("~/.xonshrc"), + get_home_xonshrc_path(), ) # Check if old config file exists and issue warning old_config_filename = xonshconfig(env) @@ -1156,7 +1162,9 @@ class GeneralSetting(Xettings): ) XONSH_INTERACTIVE = Var.with_default( True, - "``True`` if xonsh is running interactively, and ``False`` otherwise.", + "``True`` if xonsh is running interactively, and ``False`` otherwise. " + "It's highly recommended to use this variable in your ``xonshrc`` files " + "to split the code execution for interactive and non-interactive modes.", is_configurable=False, ) XONSH_LOGIN = Var.with_default( diff --git a/xonsh/main.py b/xonsh/main.py index cb710e6921..478ff944a6 100644 --- a/xonsh/main.py +++ b/xonsh/main.py @@ -13,7 +13,7 @@ from xonsh import __version__ from xonsh.built_ins import XSH from xonsh.codecache import run_code_with_cache, run_script_with_cache -from xonsh.environ import make_args_env, xonshrc_context +from xonsh.environ import get_home_xonshrc_path, make_args_env, xonshrc_context from xonsh.events import events from xonsh.execer import Execer from xonsh.imphooks import install_import_hooks @@ -308,6 +308,16 @@ def _get_rc_files(shell_kwargs: dict, args, env): # otherwise, get the RC files from XONSHRC, and RC dirs from XONSHRC_DIR rc = env.get("XONSHRC") rcd = env.get("XONSHRC_DIR") + + if not env.get("XONSH_INTERACTIVE", False): + """ + Home based ``~/.xonshrc`` file has special meaning and history. The ecosystem around shells treats this kind of files + as the place where interactive tools can add configs. To avoid unintended and unexpected affection + of this file to non-interactive behavior we remove this file in non-interactive mode e.g. script with shebang. 
+ """ + home_xonshrc = get_home_xonshrc_path() + rc = tuple(c for c in rc if c != home_xonshrc) + return rc, rcd diff --git a/xonsh/webconfig/file_writes.py b/xonsh/webconfig/file_writes.py index 256adf9a59..1b1032966b 100644 --- a/xonsh/webconfig/file_writes.py +++ b/xonsh/webconfig/file_writes.py @@ -4,6 +4,8 @@ import re import typing as tp +from xonsh.environ import get_home_xonshrc_path + def write_value(value: str, _) -> str: return f"{value!r}" @@ -50,7 +52,7 @@ def config_to_xonsh( yield suffix -RC_FILE = "~/.xonshrc" +RC_FILE = get_home_xonshrc_path() def insert_into_xonshrc(
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xonsh__xonsh-5473@bc8da38
xonsh/xonsh
Python
5,473
Alias that returns modified command
### Motivation After deep dive into [How make sudo expand aliases?](https://github.com/xonsh/xonsh/issues/2618) I understood that we want to modify command before execution but we can't achieve this because all we can is to use callable alias to wrap the command into it. But: * Callable alias is a complex "process wrapper" and using it with subprocess operators has requirements on managing threading, capturing and std manually. Implementing this is [high bar](https://github.com/xonsh/xonsh/issues/2893) for inexperienced users who just want to play with command itself. * We need the way between simple string aliases and callable aliases that will works good with alias resolving. I implemented this way as "Alias that returns modified command". ### Before No way to create alias that does logic and returns modified command without using callable alias wrapper. ### After ```xsh @aliases.register @aliases.return_command def _vi(args): """Universal vi editor.""" if $(which vim 2>/dev/null): return ['vim'] + args else: return ['vi'] + args ``` ```xsh @aliases.register @aliases.return_command def _xsudo(args): return ['sudo', '--', *aliases.eval_alias(args)] aliases['cow'] = 'echo mooo' xsudo cow tooo # Password: # mooo tooo xsudo cow tooo | grep m # mooo tooo ``` Closes #2618 #2893 ### Notes * It's uncomfortable work with aliases object when we have dict aliases for Windows in tests. We need to change it some day: #5452 ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2024-06-01T20:20:36Z
Question: how make sudo expand aliases? What is the official way to make sudo expand aliases, like `alias sudo='sudo '` on bash? This is what I tried: ```xsh $ACTUAL_SUDO = $(which sudo) def sudo_expanding_aliases(args): $ACTUAL_SUDO @(aliases.eval_alias(args)) aliases['sudo'] = sudo_expanding_aliases del sudo_expanding_aliases ``` This works, but it's kinda verbose and pollutes the environment. ```xsh def sudo_expanding_aliases_clojure(): actual_sudo = $(which sudo) def sudo_expanding_aliases(args): @(actual_sudo) @(aliases.eval_alias(args)) return sudo_expanding_aliases aliases['sudo'] = sudo_expanding_aliases_clojure() del sudo_expanding_aliases ``` That one is a syntax error around the `return sudo_expanding_aliases` and I have no idea why. It's also even more verbose and ugly. Am I thinking in the correct direction? Is it a good idea in general? Am I doing the expansion right? How can I reduce environmental pollution? ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
[ { "body": "What is the official way to make sudo expand aliases, like `alias sudo='sudo '` on bash?\r\n\r\nThis is what I tried:\r\n```xsh\r\n$ACTUAL_SUDO = $(which sudo)\r\ndef sudo_expanding_aliases(args):\r\n $ACTUAL_SUDO @(aliases.eval_alias(args))\r\naliases['sudo'] = sudo_expanding_aliases\r\ndel sudo_expanding_aliases\r\n```\r\n\r\nThis works, but it's kinda verbose and pollutes the environment.\r\n\r\n```xsh\r\ndef sudo_expanding_aliases_clojure():\r\n actual_sudo = $(which sudo)\r\n def sudo_expanding_aliases(args):\r\n @(actual_sudo) @(aliases.eval_alias(args))\r\n return sudo_expanding_aliases\r\naliases['sudo'] = sudo_expanding_aliases_clojure()\r\ndel sudo_expanding_aliases\r\n```\r\n\r\nThat one is a syntax error around the `return sudo_expanding_aliases` and I have no idea why. It's also even more verbose and ugly.\r\n\r\nAm I thinking in the correct direction? Is it a good idea in general? Am I doing the expansion right? How can I reduce environmental pollution?\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 2618, "title": "Question: how make sudo expand aliases?" } ]
645c335fc6cf3088feb6e657b298d01929e23e55
{ "head_commit": "bc8da384998acf4a236baa4c8063df5d7e807c05", "head_commit_message": "Merge branch 'main' into command_alias", "patch_to_review": "diff --git a/docs/tutorial.rst b/docs/tutorial.rst\nindex 30cf8a1eaa..67b7ff92b5 100644\n--- a/docs/tutorial.rst\n+++ b/docs/tutorial.rst\n@@ -1275,7 +1275,7 @@ functions. If you don't know what these do, you probably don't need them.\n \n \n Aliases\n-==============================\n+=======\n Another important xonsh built-in is the ``aliases`` mapping. This is\n like a dictionary that affects how subprocess commands are run. If you are\n familiar with the Bash ``alias`` built-in, this is similar. Alias command\n@@ -1305,6 +1305,44 @@ If you were to run ``gco feature-fabulous`` with the above aliases in effect,\n the command would reduce to ``['git', 'checkout', 'feature-fabulous']`` before\n being executed.\n \n+Alias to modify command\n+-----------------------\n+\n+The best way to modify command on the fly is to use alias that returns modified command.\n+One of the most interesting application is expanding an aliases:\n+\n+.. code-block:: xonshcon\n+\n+ >>> @aliases.register\n+ ... @aliases.return_command\n+ ... def _xsudo(args):\n+ ... \"\"\"Sudo with expanding aliases.\"\"\"\n+ ... return ['sudo', '--', *aliases.eval_alias(args)]\n+ ...\n+ >>> aliases['install'] = \"apt install cowsay\"\n+ >>> xsudo install\n+ # Password:\n+ # Install cowsay\n+\n+Or implement logic to run the right command:\n+\n+.. code-block:: xonshcon\n+\n+ >>> @aliases.register\n+ ... @aliases.return_command\n+ ... def _vi(args):\n+ ... \"\"\"Universal vi editor.\"\"\"\n+ ... if $(which vim 2>/dev/null):\n+ ... return ['vim'] + args\n+ ... else:\n+ ... return ['vi'] + args\n+ ...\n+ >>> vi file\n+\n+\n+ExecAlias\n+---------\n+\n If the string is representing a block of xonsh code, the alias will be registered\n as an ``ExecAlias``, which is a callable alias. This block of code will then be\n executed whenever the alias is run. 
The arguments are available in the list ``$args``\ndiff --git a/news/alias_return_cmd.rst b/news/alias_return_cmd.rst\nnew file mode 100644\nindex 0000000000..2167c7d26a\n--- /dev/null\n+++ b/news/alias_return_cmd.rst\n@@ -0,0 +1,23 @@\n+**Added:**\n+\n+* Added ``@aliases.return_command`` decorator to eliminate the need to wrap the logic for modifying command into callable alias wrapper (#5473).\n+\n+**Changed:**\n+\n+* <news item>\n+\n+**Deprecated:**\n+\n+* <news item>\n+\n+**Removed:**\n+\n+* <news item>\n+\n+**Fixed:**\n+\n+* <news item>\n+\n+**Security:**\n+\n+* <news item>\ndiff --git a/tests/procs/test_specs.py b/tests/procs/test_specs.py\nindex 1cf74dbc5b..b307fc070a 100644\n--- a/tests/procs/test_specs.py\n+++ b/tests/procs/test_specs.py\n@@ -183,6 +183,7 @@ def test_interrupted_process_returncode(xonsh_session, captured, interactive):\n \n \n @skip_if_on_windows\[email protected](reruns=3, reruns_delay=1)\n def test_proc_raise_subproc_error(xonsh_session):\n xonsh_session.env[\"RAISE_SUBPROC_ERROR\"] = False\n \n@@ -469,3 +470,110 @@ def alias(cls, args, stdin, stdout):\n xession.aliases[\"alias_with_partial_args\"] = Class.alias\n out = run_subproc([[\"alias_with_partial_args\"]], captured=\"stdout\")\n assert out == \"ok\"\n+\n+\n+def test_alias_return_command_alone(xession):\n+ @xession.aliases.register(\"wakka\")\n+ @xession.aliases.return_command\n+ def _wakka(args):\n+ return [\"echo\"] + args\n+\n+ cmds = [\n+ [\"wakka\"],\n+ ]\n+ spec = cmds_to_specs(cmds, captured=\"object\")[-1]\n+ assert spec.cmd == [\"echo\"]\n+ assert spec.alias_name == \"wakka\"\n+\n+\n+def test_alias_return_command_alone_args(xession):\n+ @xession.aliases.register(\"wakka\")\n+ @xession.aliases.return_command\n+ def _wakka(args):\n+ return [\"echo\", \"e0\", \"e1\"] + args\n+\n+ cmds = [\n+ [\"wakka\", \"0\", \"1\"],\n+ ]\n+ spec = cmds_to_specs(cmds, captured=\"object\")[-1]\n+ assert spec.cmd == [\"echo\", \"e0\", \"e1\", \"0\", \"1\"]\n+ assert spec.alias_name == \"wakka\"\n+\n+\n+def test_alias_return_command_chain(xession):\n+ xession.aliases[\"foreground\"] = \"midground f0 f1\"\n+\n+ @xession.aliases.register(\"midground\")\n+ @xession.aliases.return_command\n+ def _midground(args):\n+ return [\"ground\", \"m0\", \"m1\"] + args\n+\n+ xession.aliases[\"ground\"] = \"background g0 g1\"\n+ xession.aliases[\"background\"] = \"echo b0 b1\"\n+\n+ cmds = [\n+ [\"foreground\", \"0\", \"1\"],\n+ ]\n+ spec = cmds_to_specs(cmds, captured=\"object\")[-1]\n+ assert spec.cmd == [\n+ \"echo\",\n+ \"b0\",\n+ \"b1\",\n+ \"g0\",\n+ \"g1\",\n+ \"m0\",\n+ \"m1\",\n+ \"f0\",\n+ \"f1\",\n+ \"0\",\n+ \"1\",\n+ ]\n+ assert spec.alias_name == \"foreground\"\n+\n+\n+def test_alias_return_command_chain_spec_modifiers(xession):\n+ xession.aliases[\"foreground\"] = \"midground f0 f1\"\n+\n+ xession.aliases[\"xunthread\"] = SpecAttrModifierAlias(\n+ {\"threadable\": False, \"force_threadable\": False}\n+ )\n+\n+ @xession.aliases.register(\"midground\")\n+ @xession.aliases.return_command\n+ def _midground(args):\n+ return [\"ground\", \"m0\", \"m1\"]\n+\n+ xession.aliases[\"ground\"] = \"background g0 g1\"\n+ xession.aliases[\"background\"] = \"xunthread echo b0 b1\"\n+\n+ cmds = [\n+ [\"foreground\", \"0\", \"1\"],\n+ ]\n+ spec = cmds_to_specs(cmds, captured=\"object\")[-1]\n+ assert spec.cmd == [\"echo\", \"b0\", \"b1\", \"g0\", \"g1\", \"m0\", \"m1\"]\n+ assert spec.alias_name == \"foreground\"\n+ assert spec.threadable is False\n+\n+\n+def test_alias_return_command_eval_inside(xession):\n+ xession.aliases[\"xthread\"] 
= SpecAttrModifierAlias(\n+ {\"threadable\": True, \"force_threadable\": True}\n+ )\n+\n+ @xession.aliases.register(\"xsudo\")\n+ @xession.aliases.return_command\n+ def _midground(args, spec_modifiers=None):\n+ return [\n+ \"sudo\",\n+ *xession.aliases.eval_alias(args, spec_modifiers=spec_modifiers),\n+ ]\n+\n+ xession.aliases[\"cmd\"] = \"xthread echo 1\"\n+\n+ cmds = [\n+ [\"xsudo\", \"cmd\"],\n+ ]\n+ spec = cmds_to_specs(cmds, captured=\"object\")[-1]\n+ assert spec.cmd == [\"sudo\", \"echo\", \"1\"]\n+ assert spec.alias_name == \"xsudo\"\n+ assert spec.threadable is True\ndiff --git a/tests/test_aliases.py b/tests/test_aliases.py\nindex a2c0e45a34..b672a180c7 100644\n--- a/tests/test_aliases.py\n+++ b/tests/test_aliases.py\n@@ -6,7 +6,7 @@\n \n import pytest\n \n-from xonsh.aliases import Aliases, ExecAlias\n+from xonsh.aliases import Aliases, ExecAlias, run_alias_by_params\n \n \n def cd(args, stdin=None):\n@@ -53,10 +53,17 @@ def test_eval_recursive(xession):\n assert ales.get(\"color_ls\") == [\"ls\", \"- -\", \"--color=true\"]\n \n \n+def test_eval_callable(xession):\n+ ales = make_aliases()\n+ resolved = ales.get([\"cd\", \"tmp\"])\n+ assert callable(resolved[0])\n+ assert isinstance(resolved[1], str)\n+\n+\n def test_eval_recursive_callable_partial(xonsh_execer, xession):\n ales = make_aliases()\n xession.env[\"HOME\"] = os.path.expanduser(\"~\")\n- assert ales.get(\"indirect_cd\")([\"arg2\", \"arg3\"]) == [\"..\", \"arg2\", \"arg3\"]\n+ assert ales.get([\"indirect_cd\", \"arg2\", \"arg3\"])[1:] == [\"..\", \"arg2\", \"arg3\"]\n \n \n def _return_to_sender_all(args, stdin, stdout, stderr, spec, stack):\n@@ -74,9 +81,11 @@ def _return_to_sender_all(args, stdin, stdout, stderr, spec, stack):\n \n def test_recursive_callable_partial_all(xession):\n ales = Aliases({\"rtn\": _return_to_sender_all, \"rtn-recurse\": [\"rtn\", \"arg1\"]})\n- alias = ales.get(\"rtn-recurse\")\n+ alias = ales.get(\"rtn-recurse\")[0]\n assert callable(alias)\n- args, obs = alias([\"arg2\"], stdin=\"a\", stdout=\"b\", stderr=\"c\", spec=\"d\", stack=\"e\")\n+ args, obs = alias(\n+ [\"arg1\", \"arg2\"], stdin=\"a\", stdout=\"b\", stderr=\"c\", spec=\"d\", stack=\"e\"\n+ )\n assert args == [\"arg1\", \"arg2\"]\n assert len(obs) == 5\n exp = {\"stdin\": \"a\", \"stdout\": \"b\", \"stderr\": \"c\", \"spec\": \"d\", \"stack\": \"e\"}\n@@ -89,9 +98,9 @@ def _return_to_sender_handles(args, stdin, stdout, stderr):\n \n def test_recursive_callable_partial_handles(xession):\n ales = Aliases({\"rtn\": _return_to_sender_handles, \"rtn-recurse\": [\"rtn\", \"arg1\"]})\n- alias = ales.get(\"rtn-recurse\")\n+ alias = ales.get(\"rtn-recurse\")[0]\n assert callable(alias)\n- args, obs = alias([\"arg2\"], stdin=\"a\", stdout=\"b\", stderr=\"c\")\n+ args, obs = alias([\"arg1\", \"arg2\"], stdin=\"a\", stdout=\"b\", stderr=\"c\")\n assert args == [\"arg1\", \"arg2\"]\n assert len(obs) == 3\n exp = {\"stdin\": \"a\", \"stdout\": \"b\", \"stderr\": \"c\"}\n@@ -104,7 +113,7 @@ def _return_to_sender_none():\n \n def test_recursive_callable_partial_none(xession):\n ales = Aliases({\"rtn\": _return_to_sender_none, \"rtn-recurse\": [\"rtn\"]})\n- alias = ales.get(\"rtn-recurse\")\n+ alias = ales.get(\"rtn-recurse\")[0]\n assert callable(alias)\n args, obs = alias()\n assert args == \"wakka\"\n@@ -214,3 +223,26 @@ def with_options(): ...\n def _private(): ...\n \n assert set(aliases) == {\"debug\", \"name\", \"private\"}\n+\n+\n+def test_run_alias_by_params():\n+ def alias_named_params(args, stdout):\n+ return (args, stdout)\n+\n+ def 
alias_named_params_rev(stdout, args):\n+ return (args, stdout)\n+\n+ def alias_list_params(a, i, o, e):\n+ return (a, i, o, e)\n+\n+ assert run_alias_by_params(alias_named_params, {\"args\": 1, \"stdout\": 2}) == (1, 2)\n+ assert run_alias_by_params(alias_named_params_rev, {\"args\": 1, \"stdout\": 2}) == (\n+ 1,\n+ 2,\n+ )\n+ assert run_alias_by_params(alias_list_params, {\"args\": 1, \"stderr\": 4}) == (\n+ 1,\n+ None,\n+ None,\n+ 4,\n+ )\ndiff --git a/xonsh/aliases.py b/xonsh/aliases.py\nindex bc0e4b0703..721e7e40d8 100644\n--- a/xonsh/aliases.py\n+++ b/xonsh/aliases.py\n@@ -1,7 +1,6 @@\n \"\"\"Aliases for the xonsh shell.\"\"\"\n \n import argparse\n-import collections.abc as cabc\n import functools\n import inspect\n import os\n@@ -10,6 +9,8 @@\n import sys\n import types\n import typing as tp\n+from collections import OrderedDict\n+from collections import abc as cabc\n \n import xonsh.completers._aliases as xca\n import xonsh.history.main as xhm\n@@ -42,6 +43,7 @@\n argvquote,\n escape_windows_cmd_string,\n print_color,\n+ print_exception,\n strip_simple_quotes,\n swap_values,\n to_repr_pretty_,\n@@ -59,8 +61,9 @@ def EXEC_ALIAS_RE():\n class FuncAlias:\n \"\"\"Provides a callable alias for xonsh commands.\"\"\"\n \n- attributes_show = [\"__xonsh_threadable__\", \"__xonsh_capturable__\"]\n+ attributes_show = [\"__xonsh_threadable__\", \"__xonsh_capturable__\", \"return_command\"]\n attributes_inherit = attributes_show + [\"__doc__\"]\n+ return_command = False\n \n def __init__(self, name, func=None):\n self.__name__ = self.name = name\n@@ -79,12 +82,27 @@ def __repr__(self):\n return f\"FuncAlias({repr(r)})\"\n \n def __call__(\n- self, args=None, stdin=None, stdout=None, stderr=None, spec=None, stack=None\n+ self,\n+ args=None,\n+ stdin=None,\n+ stdout=None,\n+ stderr=None,\n+ spec=None,\n+ stack=None,\n+ spec_modifiers=None,\n ):\n- func_args = [args, stdin, stdout, stderr, spec, stack][\n- : len(inspect.signature(self.func).parameters)\n- ]\n- return self.func(*func_args)\n+ return run_alias_by_params(\n+ self.func,\n+ {\n+ \"args\": args,\n+ \"stdin\": stdin,\n+ \"stdout\": stdout,\n+ \"stderr\": stderr,\n+ \"spec\": spec,\n+ \"stack\": stack,\n+ \"spec_modifiers\": spec_modifiers,\n+ },\n+ )\n \n \n class Aliases(cabc.MutableMapping):\n@@ -132,28 +150,18 @@ def wrapper(func):\n \n return wrapper\n \n- def get(self, key, default=None, spec_modifiers=None):\n- \"\"\"Returns the (possibly modified) value. If the key is not present,\n- then `default` is returned.\n- If the value is callable, it is returned without modification. 
If it\n- is an iterable of strings it will be evaluated recursively to expand\n- other aliases, resulting in a new list or a \"partially applied\"\n- callable.\n- \"\"\"\n- spec_modifiers = spec_modifiers if spec_modifiers is not None else []\n- val = self._raw.get(key)\n- if val is None:\n- return default\n- elif isinstance(val, cabc.Iterable) or callable(val):\n- return self.eval_alias(\n- val, seen_tokens={key}, spec_modifiers=spec_modifiers\n- )\n- else:\n- msg = \"alias of {!r} has an inappropriate type: {!r}\"\n- raise TypeError(msg.format(key, val))\n+ def return_command(self, f):\n+ \"\"\"Decorator that switches alias from returning result to return in new command for execution.\"\"\"\n+ f.return_command = True\n+ return f\n \n def eval_alias(\n- self, value, seen_tokens=frozenset(), acc_args=(), spec_modifiers=None\n+ self,\n+ value,\n+ seen_tokens=frozenset(),\n+ acc_args=(),\n+ spec_modifiers=None,\n+ found_return_command=None,\n ):\n \"\"\"\n \"Evaluates\" the alias ``value``, by recursively looking up the leftmost\n@@ -166,6 +174,9 @@ def eval_alias(\n ``[\"-al\", \"arg\"]``.\n \"\"\"\n spec_modifiers = spec_modifiers if spec_modifiers is not None else []\n+ found_return_command = (\n+ found_return_command if found_return_command is not None else []\n+ )\n # Beware of mutability: default values for keyword args are evaluated\n # only once.\n if (\n@@ -182,8 +193,19 @@ def eval_alias(\n break\n value = value[i:]\n \n+ if callable(value) and getattr(value, \"return_command\", False):\n+ try:\n+ found_return_command.append(value.__name__)\n+ value = value(acc_args, spec_modifiers=spec_modifiers)\n+ acc_args = []\n+ except Exception as e:\n+ print_exception(f\"Exception inside alias {value}: {e}\")\n+ return None\n+ if not len(value):\n+ raise ValueError(\"return_command alias: zero arguments.\")\n+\n if callable(value):\n- return partial_eval_alias(value, acc_args=acc_args)\n+ return [value] + list(acc_args)\n else:\n expand_path = XSH.expand_path\n token, *rest = map(expand_path, value)\n@@ -203,8 +225,65 @@ def eval_alias(\n seen_tokens,\n acc_args,\n spec_modifiers=spec_modifiers,\n+ found_return_command=found_return_command,\n )\n \n+ def get(\n+ self,\n+ key,\n+ default=None,\n+ spec_modifiers=None,\n+ found_return_command=None,\n+ ):\n+ \"\"\"\n+ Returns list that represent command with resolved aliases.\n+ The ``key`` can be string with alias name or list for a command.\n+ In the first position will be the resolved command name or callable alias.\n+ If the key is not present, then `default` is returned.\n+\n+ ``spec_modifiers`` is the list of SpecModifier objects that found during\n+ resolving aliases (#5443).\n+ ``found_return_command`` is the list of aliases with return command functionality that found\n+ during resolving aliases (#5473).\n+\n+ Note! 
The return value is always list because during resolving\n+ we can find return_command alias that can completely replace\n+ command and add new arguments.\n+ \"\"\"\n+ spec_modifiers = spec_modifiers if spec_modifiers is not None else []\n+ found_return_command = (\n+ found_return_command if found_return_command is not None else []\n+ )\n+ args = []\n+ if isinstance(key, list):\n+ args = key[1:]\n+ key = key[0]\n+ val = self._raw.get(key)\n+ if callable(val) and getattr(val, \"return_command\", False):\n+ try:\n+ found_return_command.append(val.__name__)\n+ val = val(args, spec_modifiers=spec_modifiers)\n+ args = []\n+ except Exception as e:\n+ print_exception(f\"Exception inside alias {key!r}: {e}\")\n+ return None\n+ if not len(val):\n+ raise ValueError(\"return_command alias: zero arguments.\")\n+\n+ if val is None:\n+ return default\n+ elif isinstance(val, cabc.Iterable) or callable(val):\n+ return self.eval_alias(\n+ val,\n+ seen_tokens={key},\n+ spec_modifiers=spec_modifiers,\n+ acc_args=args,\n+ found_return_command=found_return_command,\n+ )\n+ else:\n+ msg = \"alias of {!r} has an inappropriate type: {!r}\"\n+ raise TypeError(msg.format(key, val))\n+\n def expand_alias(self, line: str, cursor_index: int) -> str:\n \"\"\"Expands any aliases present in line if alias does not point to a\n builtin function and if alias is only a single command.\n@@ -408,6 +487,21 @@ def __call__(\n return self.f(args, stdin, stdout, stderr, spec, stack)\n \n \n+class PartialEvalAlias7(PartialEvalAliasBase):\n+ def __call__(\n+ self,\n+ args,\n+ stdin=None,\n+ stdout=None,\n+ stderr=None,\n+ spec=None,\n+ stack=None,\n+ spec_modifiers=None,\n+ ):\n+ args = list(self.acc_args) + args\n+ return self.f(args, stdin, stdout, stderr, spec, stack, spec_modifiers)\n+\n+\n PARTIAL_EVAL_ALIASES = (\n PartialEvalAlias0,\n PartialEvalAlias1,\n@@ -416,6 +510,7 @@ def __call__(\n PartialEvalAlias4,\n PartialEvalAlias5,\n PartialEvalAlias6,\n+ PartialEvalAlias7,\n )\n \n \n@@ -436,13 +531,50 @@ def partial_eval_alias(f, acc_args=()):\n numargs += 1\n elif name in ALIAS_KWARG_NAMES and param.kind == param.KEYWORD_ONLY:\n numargs += 1\n- if numargs < 7:\n+ if numargs < 8:\n return PARTIAL_EVAL_ALIASES[numargs](f, acc_args=acc_args)\n else:\n- e = \"Expected proxy with 6 or fewer arguments for {}, not {}\"\n+ e = \"Expected proxy with 7 or fewer arguments for {}, not {}\"\n raise XonshError(e.format(\", \".join(ALIAS_KWARG_NAMES), numargs))\n \n \n+def run_alias_by_params(func: tp.Callable, params: dict[str, tp.Any]):\n+ \"\"\"\n+ Run alias function based on signature and params.\n+ If function param names are in alias signature fill them.\n+ If function params have unknown names fill using alias signature order.\n+ \"\"\"\n+ alias_params = OrderedDict(\n+ {\n+ \"args\": None,\n+ \"stdin\": None,\n+ \"stdout\": None,\n+ \"stderr\": None,\n+ \"spec\": None,\n+ \"stack\": None,\n+ \"spec_modifiers\": None,\n+ }\n+ )\n+ alias_params |= params\n+ sign = inspect.signature(func)\n+ func_params = sign.parameters.items()\n+ kwargs = {\n+ name: alias_params[name] for name, p in func_params if name in alias_params\n+ }\n+\n+ if len(kwargs) != len(func_params):\n+ # There is unknown param. 
Switch to positional mode.\n+ kwargs = OrderedDict()\n+ vals = list(alias_params.values())\n+ len_vals = len(vals)\n+ i = 0\n+ for namep in func_params:\n+ kwargs[namep[0]] = vals[i]\n+ if (i := i + 1) == len_vals:\n+ break\n+ return func(**kwargs)\n+\n+\n #\n # Actual aliases below\n #\ndiff --git a/xonsh/completers/_aliases.py b/xonsh/completers/_aliases.py\nindex 6aeb069dcf..2723710e68 100644\n--- a/xonsh/completers/_aliases.py\n+++ b/xonsh/completers/_aliases.py\n@@ -158,7 +158,7 @@ def complete_aliases(command: CommandContext):\n if cmd not in XSH.aliases:\n # only complete aliases\n return\n- alias = XSH.aliases.get(cmd) # type: ignore\n+ alias = XSH.aliases.get(cmd)[0] # type: ignore\n \n completer = getattr(alias, \"xonsh_complete\", None)\n if not completer:\ndiff --git a/xonsh/procs/specs.py b/xonsh/procs/specs.py\nindex 6df5c96d6a..492a1d2be4 100644\n--- a/xonsh/procs/specs.py\n+++ b/xonsh/procs/specs.py\n@@ -408,6 +408,7 @@ def __init__(\n self.args = _flatten_cmd_redirects(cmd)\n self.alias = None\n self.alias_name = None\n+ self.alias_return_command = None\n self.alias_stack = XSH.env.get(\"__ALIAS_STACK\", \"\").split(\":\")\n self.binary_loc = None\n self.is_proxy = False\n@@ -707,7 +708,6 @@ def resolve_redirects(self):\n def resolve_alias(self):\n \"\"\"Sets alias in command, if applicable.\"\"\"\n cmd0 = self.cmd[0]\n- spec_modifiers = []\n if cmd0 in self.alias_stack:\n # Disabling the alias resolving to prevent infinite loop in call stack\n # and futher using binary_loc to resolve the alias name.\n@@ -715,19 +715,34 @@ def resolve_alias(self):\n return\n \n if callable(cmd0):\n- alias = cmd0\n+ self.alias = cmd0\n else:\n+ found_spec_modifiers = []\n+ found_return_command = []\n if isinstance(XSH.aliases, dict):\n # Windows tests\n alias = XSH.aliases.get(cmd0, None)\n+ if alias is not None:\n+ alias = alias + self.cmd[1:]\n else:\n- alias = XSH.aliases.get(cmd0, None, spec_modifiers=spec_modifiers)\n+ alias = XSH.aliases.get(\n+ self.cmd,\n+ None,\n+ spec_modifiers=found_spec_modifiers,\n+ found_return_command=found_return_command,\n+ )\n if alias is not None:\n self.alias_name = cmd0\n- self.alias = alias\n- if spec_modifiers:\n- for mod in spec_modifiers:\n- self.add_spec_modifier(mod)\n+ if callable(alias[0]):\n+ self.alias = alias[0]\n+ self.cmd = [cmd0] + alias[1:]\n+ else:\n+ self.alias = alias\n+\n+ self.alias_return_command = bool(found_return_command)\n+ if found_spec_modifiers:\n+ for mod in found_spec_modifiers:\n+ self.add_spec_modifier(mod)\n \n def resolve_binary_loc(self):\n \"\"\"Sets the binary location\"\"\"\n@@ -765,8 +780,7 @@ def resolve_executable_commands(self):\n self.cmd.pop(0)\n return\n else:\n- self.cmd = alias + self.cmd[1:]\n- # resolve any redirects the aliases may have applied\n+ self.cmd = alias\n self.resolve_redirects()\n if self.binary_loc is None:\n return\n@@ -971,7 +985,13 @@ def _trace_specs(trace_mode, specs, cmds, captured):\n }\n p |= {\n a: getattr(s, a, None)\n- for a in [\"alias_name\", \"binary_loc\", \"threadable\", \"background\"]\n+ for a in [\n+ \"alias_name\",\n+ \"alias\",\n+ \"binary_loc\",\n+ \"threadable\",\n+ \"background\",\n+ ]\n }\n if trace_mode == 3:\n p |= {\n" }
[ { "diff_hunk": "@@ -59,8 +61,9 @@ def EXEC_ALIAS_RE():\n class FuncAlias:\n \"\"\"Provides a callable alias for xonsh commands.\"\"\"\n \n- attributes_show = [\"__xonsh_threadable__\", \"__xonsh_capturable__\"]\n+ attributes_show = [\"__xonsh_threadable__\", \"__xonsh_capturable__\", \"return_command\"]\n attributes_inherit = attributes_show + [\"__doc__\"]\n+ return_command = False", "line": null, "original_line": 66, "original_start_line": null, "path": "xonsh/aliases.py", "start_line": null, "text": "@user1:\nI try to avoid using boolean flags. They're pretty limiting and under-descriptive. In this case, for example, the reader knows that `return_command = True` means to return the command, but they might be left wondering, \"what is returned if `return_command = False`?\"\r\n\r\nConsider instead:\r\n\r\n```\r\nreturn_what : Literal[\"command\", \"result\"] = \"result\"\r\n```\r\n\r\nThis approach is more descriptive, moves the selector (command or not) out of the variable name and into the value, and is more extensible should a new use-case arise for another variant of what to return.\n\n@author:\nAwesome! Fixed. Thanks!" }, { "diff_hunk": "@@ -436,13 +531,50 @@ def partial_eval_alias(f, acc_args=()):\n numargs += 1\n elif name in ALIAS_KWARG_NAMES and param.kind == param.KEYWORD_ONLY:\n numargs += 1\n- if numargs < 7:\n+ if numargs < 8:\n return PARTIAL_EVAL_ALIASES[numargs](f, acc_args=acc_args)\n else:\n- e = \"Expected proxy with 6 or fewer arguments for {}, not {}\"\n+ e = \"Expected proxy with 7 or fewer arguments for {}, not {}\"\n raise XonshError(e.format(\", \".join(ALIAS_KWARG_NAMES), numargs))\n \n \n+def run_alias_by_params(func: tp.Callable, params: dict[str, tp.Any]):\n+ \"\"\"\n+ Run alias function based on signature and params.\n+ If function param names are in alias signature fill them.\n+ If function params have unknown names fill using alias signature order.\n+ \"\"\"\n+ alias_params = OrderedDict(\n+ {\n+ \"args\": None,\n+ \"stdin\": None,\n+ \"stdout\": None,\n+ \"stderr\": None,\n+ \"spec\": None,\n+ \"stack\": None,\n+ \"spec_modifiers\": None,\n+ }\n+ )\n+ alias_params |= params\n+ sign = inspect.signature(func)\n+ func_params = sign.parameters.items()\n+ kwargs = {\n+ name: alias_params[name] for name, p in func_params if name in alias_params\n+ }\n+\n+ if len(kwargs) != len(func_params):\n+ # There is unknown param. Switch to positional mode.\n+ kwargs = OrderedDict()\n+ vals = list(alias_params.values())\n+ len_vals = len(vals)\n+ i = 0\n+ for namep in func_params:\n+ kwargs[namep[0]] = vals[i]\n+ if (i := i + 1) == len_vals:\n+ break", "line": null, "original_line": 574, "original_start_line": 567, "path": "xonsh/aliases.py", "start_line": null, "text": "@user1:\nI'm pretty sure this behavior is expressable as a single expression\r\n\r\n```suggestion\r\n kwargs = {\r\n namep[0]: val\r\n for namep, val in zip(func_params, alias_params.values())\r\n }\r\n```\r\n\r\nor\r\n\r\n```suggestion\r\n kwargs = dict(zip(map(operator.itemgetter(0), func_params), alias_params.values()))\r\n```\r\n\r\nAnd of course, you can use `OrderedDict` if you want, but with Python 3.6+, the std dict will suffice." } ]
9a5ecab454c0fb01a6d3d47fadb7a7eb9ae5d8bc
diff --git a/docs/tutorial.rst b/docs/tutorial.rst index 30cf8a1eaa..0a39737e93 100644 --- a/docs/tutorial.rst +++ b/docs/tutorial.rst @@ -1275,7 +1275,7 @@ functions. If you don't know what these do, you probably don't need them. Aliases -============================== +======= Another important xonsh built-in is the ``aliases`` mapping. This is like a dictionary that affects how subprocess commands are run. If you are familiar with the Bash ``alias`` built-in, this is similar. Alias command @@ -1305,6 +1305,44 @@ If you were to run ``gco feature-fabulous`` with the above aliases in effect, the command would reduce to ``['git', 'checkout', 'feature-fabulous']`` before being executed. +Alias to modify command +----------------------- + +The best way to modify command on the fly is to use alias that returns modified command. +One of the most interesting application is expanding an alias: + +.. code-block:: xonshcon + + >>> @aliases.register + ... @aliases.return_command + ... def _xsudo(args): + ... """Sudo with expanding aliases.""" + ... return ['sudo', '--', *aliases.eval_alias(args)] + ... + >>> aliases['install'] = "apt install cowsay" + >>> xsudo install + # Password: + # Install cowsay + +Or implement logic to run the right command: + +.. code-block:: xonshcon + + >>> @aliases.register + ... @aliases.return_command + ... def _vi(args): + ... """Universal vi editor.""" + ... if $(which vim 2>/dev/null): + ... return ['vim'] + args + ... else: + ... return ['vi'] + args + ... + >>> vi file + + +ExecAlias +--------- + If the string is representing a block of xonsh code, the alias will be registered as an ``ExecAlias``, which is a callable alias. This block of code will then be executed whenever the alias is run. The arguments are available in the list ``$args`` diff --git a/news/alias_return_cmd.rst b/news/alias_return_cmd.rst new file mode 100644 index 0000000000..2167c7d26a --- /dev/null +++ b/news/alias_return_cmd.rst @@ -0,0 +1,23 @@ +**Added:** + +* Added ``@aliases.return_command`` decorator to eliminate the need to wrap the logic for modifying command into callable alias wrapper (#5473). 
+ +**Changed:** + +* <news item> + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* <news item> + +**Security:** + +* <news item> diff --git a/tests/procs/test_specs.py b/tests/procs/test_specs.py index 1cf74dbc5b..b307fc070a 100644 --- a/tests/procs/test_specs.py +++ b/tests/procs/test_specs.py @@ -183,6 +183,7 @@ def test_interrupted_process_returncode(xonsh_session, captured, interactive): @skip_if_on_windows [email protected](reruns=3, reruns_delay=1) def test_proc_raise_subproc_error(xonsh_session): xonsh_session.env["RAISE_SUBPROC_ERROR"] = False @@ -469,3 +470,110 @@ def alias(cls, args, stdin, stdout): xession.aliases["alias_with_partial_args"] = Class.alias out = run_subproc([["alias_with_partial_args"]], captured="stdout") assert out == "ok" + + +def test_alias_return_command_alone(xession): + @xession.aliases.register("wakka") + @xession.aliases.return_command + def _wakka(args): + return ["echo"] + args + + cmds = [ + ["wakka"], + ] + spec = cmds_to_specs(cmds, captured="object")[-1] + assert spec.cmd == ["echo"] + assert spec.alias_name == "wakka" + + +def test_alias_return_command_alone_args(xession): + @xession.aliases.register("wakka") + @xession.aliases.return_command + def _wakka(args): + return ["echo", "e0", "e1"] + args + + cmds = [ + ["wakka", "0", "1"], + ] + spec = cmds_to_specs(cmds, captured="object")[-1] + assert spec.cmd == ["echo", "e0", "e1", "0", "1"] + assert spec.alias_name == "wakka" + + +def test_alias_return_command_chain(xession): + xession.aliases["foreground"] = "midground f0 f1" + + @xession.aliases.register("midground") + @xession.aliases.return_command + def _midground(args): + return ["ground", "m0", "m1"] + args + + xession.aliases["ground"] = "background g0 g1" + xession.aliases["background"] = "echo b0 b1" + + cmds = [ + ["foreground", "0", "1"], + ] + spec = cmds_to_specs(cmds, captured="object")[-1] + assert spec.cmd == [ + "echo", + "b0", + "b1", + "g0", + "g1", + "m0", + "m1", + "f0", + "f1", + "0", + "1", + ] + assert spec.alias_name == "foreground" + + +def test_alias_return_command_chain_spec_modifiers(xession): + xession.aliases["foreground"] = "midground f0 f1" + + xession.aliases["xunthread"] = SpecAttrModifierAlias( + {"threadable": False, "force_threadable": False} + ) + + @xession.aliases.register("midground") + @xession.aliases.return_command + def _midground(args): + return ["ground", "m0", "m1"] + + xession.aliases["ground"] = "background g0 g1" + xession.aliases["background"] = "xunthread echo b0 b1" + + cmds = [ + ["foreground", "0", "1"], + ] + spec = cmds_to_specs(cmds, captured="object")[-1] + assert spec.cmd == ["echo", "b0", "b1", "g0", "g1", "m0", "m1"] + assert spec.alias_name == "foreground" + assert spec.threadable is False + + +def test_alias_return_command_eval_inside(xession): + xession.aliases["xthread"] = SpecAttrModifierAlias( + {"threadable": True, "force_threadable": True} + ) + + @xession.aliases.register("xsudo") + @xession.aliases.return_command + def _midground(args, spec_modifiers=None): + return [ + "sudo", + *xession.aliases.eval_alias(args, spec_modifiers=spec_modifiers), + ] + + xession.aliases["cmd"] = "xthread echo 1" + + cmds = [ + ["xsudo", "cmd"], + ] + spec = cmds_to_specs(cmds, captured="object")[-1] + assert spec.cmd == ["sudo", "echo", "1"] + assert spec.alias_name == "xsudo" + assert spec.threadable is True diff --git a/tests/test_aliases.py b/tests/test_aliases.py index a2c0e45a34..b672a180c7 100644 --- a/tests/test_aliases.py +++ b/tests/test_aliases.py 
@@ -6,7 +6,7 @@ import pytest -from xonsh.aliases import Aliases, ExecAlias +from xonsh.aliases import Aliases, ExecAlias, run_alias_by_params def cd(args, stdin=None): @@ -53,10 +53,17 @@ def test_eval_recursive(xession): assert ales.get("color_ls") == ["ls", "- -", "--color=true"] +def test_eval_callable(xession): + ales = make_aliases() + resolved = ales.get(["cd", "tmp"]) + assert callable(resolved[0]) + assert isinstance(resolved[1], str) + + def test_eval_recursive_callable_partial(xonsh_execer, xession): ales = make_aliases() xession.env["HOME"] = os.path.expanduser("~") - assert ales.get("indirect_cd")(["arg2", "arg3"]) == ["..", "arg2", "arg3"] + assert ales.get(["indirect_cd", "arg2", "arg3"])[1:] == ["..", "arg2", "arg3"] def _return_to_sender_all(args, stdin, stdout, stderr, spec, stack): @@ -74,9 +81,11 @@ def _return_to_sender_all(args, stdin, stdout, stderr, spec, stack): def test_recursive_callable_partial_all(xession): ales = Aliases({"rtn": _return_to_sender_all, "rtn-recurse": ["rtn", "arg1"]}) - alias = ales.get("rtn-recurse") + alias = ales.get("rtn-recurse")[0] assert callable(alias) - args, obs = alias(["arg2"], stdin="a", stdout="b", stderr="c", spec="d", stack="e") + args, obs = alias( + ["arg1", "arg2"], stdin="a", stdout="b", stderr="c", spec="d", stack="e" + ) assert args == ["arg1", "arg2"] assert len(obs) == 5 exp = {"stdin": "a", "stdout": "b", "stderr": "c", "spec": "d", "stack": "e"} @@ -89,9 +98,9 @@ def _return_to_sender_handles(args, stdin, stdout, stderr): def test_recursive_callable_partial_handles(xession): ales = Aliases({"rtn": _return_to_sender_handles, "rtn-recurse": ["rtn", "arg1"]}) - alias = ales.get("rtn-recurse") + alias = ales.get("rtn-recurse")[0] assert callable(alias) - args, obs = alias(["arg2"], stdin="a", stdout="b", stderr="c") + args, obs = alias(["arg1", "arg2"], stdin="a", stdout="b", stderr="c") assert args == ["arg1", "arg2"] assert len(obs) == 3 exp = {"stdin": "a", "stdout": "b", "stderr": "c"} @@ -104,7 +113,7 @@ def _return_to_sender_none(): def test_recursive_callable_partial_none(xession): ales = Aliases({"rtn": _return_to_sender_none, "rtn-recurse": ["rtn"]}) - alias = ales.get("rtn-recurse") + alias = ales.get("rtn-recurse")[0] assert callable(alias) args, obs = alias() assert args == "wakka" @@ -214,3 +223,26 @@ def with_options(): ... def _private(): ... 
assert set(aliases) == {"debug", "name", "private"} + + +def test_run_alias_by_params(): + def alias_named_params(args, stdout): + return (args, stdout) + + def alias_named_params_rev(stdout, args): + return (args, stdout) + + def alias_list_params(a, i, o, e): + return (a, i, o, e) + + assert run_alias_by_params(alias_named_params, {"args": 1, "stdout": 2}) == (1, 2) + assert run_alias_by_params(alias_named_params_rev, {"args": 1, "stdout": 2}) == ( + 1, + 2, + ) + assert run_alias_by_params(alias_list_params, {"args": 1, "stderr": 4}) == ( + 1, + None, + None, + 4, + ) diff --git a/xonsh/aliases.py b/xonsh/aliases.py index bc0e4b0703..14038bb063 100644 --- a/xonsh/aliases.py +++ b/xonsh/aliases.py @@ -1,15 +1,17 @@ """Aliases for the xonsh shell.""" import argparse -import collections.abc as cabc import functools import inspect +import operator import os import re import shutil import sys import types import typing as tp +from collections import abc as cabc +from typing import Literal import xonsh.completers._aliases as xca import xonsh.history.main as xhm @@ -42,6 +44,7 @@ argvquote, escape_windows_cmd_string, print_color, + print_exception, strip_simple_quotes, swap_values, to_repr_pretty_, @@ -59,8 +62,9 @@ def EXEC_ALIAS_RE(): class FuncAlias: """Provides a callable alias for xonsh commands.""" - attributes_show = ["__xonsh_threadable__", "__xonsh_capturable__"] + attributes_show = ["__xonsh_threadable__", "__xonsh_capturable__", "return_what"] attributes_inherit = attributes_show + ["__doc__"] + return_what: Literal["command", "result"] = "result" def __init__(self, name, func=None): self.__name__ = self.name = name @@ -79,12 +83,27 @@ def __repr__(self): return f"FuncAlias({repr(r)})" def __call__( - self, args=None, stdin=None, stdout=None, stderr=None, spec=None, stack=None + self, + args=None, + stdin=None, + stdout=None, + stderr=None, + spec=None, + stack=None, + spec_modifiers=None, ): - func_args = [args, stdin, stdout, stderr, spec, stack][ - : len(inspect.signature(self.func).parameters) - ] - return self.func(*func_args) + return run_alias_by_params( + self.func, + { + "args": args, + "stdin": stdin, + "stdout": stdout, + "stderr": stderr, + "spec": spec, + "stack": stack, + "spec_modifiers": spec_modifiers, + }, + ) class Aliases(cabc.MutableMapping): @@ -132,28 +151,17 @@ def wrapper(func): return wrapper - def get(self, key, default=None, spec_modifiers=None): - """Returns the (possibly modified) value. If the key is not present, - then `default` is returned. - If the value is callable, it is returned without modification. If it - is an iterable of strings it will be evaluated recursively to expand - other aliases, resulting in a new list or a "partially applied" - callable. 
- """ - spec_modifiers = spec_modifiers if spec_modifiers is not None else [] - val = self._raw.get(key) - if val is None: - return default - elif isinstance(val, cabc.Iterable) or callable(val): - return self.eval_alias( - val, seen_tokens={key}, spec_modifiers=spec_modifiers - ) - else: - msg = "alias of {!r} has an inappropriate type: {!r}" - raise TypeError(msg.format(key, val)) + def return_command(self, f): + """Decorator that switches alias from returning result to return in new command for execution.""" + f.return_what = "command" + return f def eval_alias( - self, value, seen_tokens=frozenset(), acc_args=(), spec_modifiers=None + self, + value, + seen_tokens=frozenset(), + acc_args=(), + spec_modifiers=None, ): """ "Evaluates" the alias ``value``, by recursively looking up the leftmost @@ -182,8 +190,18 @@ def eval_alias( break value = value[i:] + if callable(value) and getattr(value, "return_what", "result") == "command": + try: + value = value(acc_args, spec_modifiers=spec_modifiers) + acc_args = [] + except Exception as e: + print_exception(f"Exception inside alias {value}: {e}") + return None + if not len(value): + raise ValueError("return_command alias: zero arguments.") + if callable(value): - return partial_eval_alias(value, acc_args=acc_args) + return [value] + list(acc_args) else: expand_path = XSH.expand_path token, *rest = map(expand_path, value) @@ -205,6 +223,54 @@ def eval_alias( spec_modifiers=spec_modifiers, ) + def get( + self, + key, + default=None, + spec_modifiers=None, + ): + """ + Returns list that represent command with resolved aliases. + The ``key`` can be string with alias name or list for a command. + In the first position will be the resolved command name or callable alias. + If the key is not present, then `default` is returned. + + ``spec_modifiers`` is the list of SpecModifier objects that found during + resolving aliases (#5443). + + Note! The return value is always list because during resolving + we can find return_command alias that can completely replace + command and add new arguments. + """ + spec_modifiers = spec_modifiers if spec_modifiers is not None else [] + args = [] + if isinstance(key, list): + args = key[1:] + key = key[0] + val = self._raw.get(key) + if callable(val) and getattr(val, "return_what", "result") == "command": + try: + val = val(args, spec_modifiers=spec_modifiers) + args = [] + except Exception as e: + print_exception(f"Exception inside alias {key!r}: {e}") + return None + if not len(val): + raise ValueError("return_command alias: zero arguments.") + + if val is None: + return default + elif isinstance(val, cabc.Iterable) or callable(val): + return self.eval_alias( + val, + seen_tokens={key}, + spec_modifiers=spec_modifiers, + acc_args=args, + ) + else: + msg = "alias of {!r} has an inappropriate type: {!r}" + raise TypeError(msg.format(key, val)) + def expand_alias(self, line: str, cursor_index: int) -> str: """Expands any aliases present in line if alias does not point to a builtin function and if alias is only a single command. 
@@ -408,6 +474,21 @@ def __call__( return self.f(args, stdin, stdout, stderr, spec, stack) +class PartialEvalAlias7(PartialEvalAliasBase): + def __call__( + self, + args, + stdin=None, + stdout=None, + stderr=None, + spec=None, + stack=None, + spec_modifiers=None, + ): + args = list(self.acc_args) + args + return self.f(args, stdin, stdout, stderr, spec, stack, spec_modifiers) + + PARTIAL_EVAL_ALIASES = ( PartialEvalAlias0, PartialEvalAlias1, @@ -416,6 +497,7 @@ def __call__( PartialEvalAlias4, PartialEvalAlias5, PartialEvalAlias6, + PartialEvalAlias7, ) @@ -436,13 +518,43 @@ def partial_eval_alias(f, acc_args=()): numargs += 1 elif name in ALIAS_KWARG_NAMES and param.kind == param.KEYWORD_ONLY: numargs += 1 - if numargs < 7: + if numargs < 8: return PARTIAL_EVAL_ALIASES[numargs](f, acc_args=acc_args) else: - e = "Expected proxy with 6 or fewer arguments for {}, not {}" + e = "Expected proxy with 7 or fewer arguments for {}, not {}" raise XonshError(e.format(", ".join(ALIAS_KWARG_NAMES), numargs)) +def run_alias_by_params(func: tp.Callable, params: dict[str, tp.Any]): + """ + Run alias function based on signature and params. + If function param names are in alias signature fill them. + If function params have unknown names fill using alias signature order. + """ + alias_params = { + "args": None, + "stdin": None, + "stdout": None, + "stderr": None, + "spec": None, + "stack": None, + "spec_modifiers": None, + } + alias_params |= params + sign = inspect.signature(func) + func_params = sign.parameters.items() + kwargs = { + name: alias_params[name] for name, p in func_params if name in alias_params + } + + if len(kwargs) != len(func_params): + # There is unknown param. Switch to positional mode. + kwargs = dict( + zip(map(operator.itemgetter(0), func_params), alias_params.values()) + ) + return func(**kwargs) + + # # Actual aliases below # diff --git a/xonsh/completers/_aliases.py b/xonsh/completers/_aliases.py index 6aeb069dcf..2723710e68 100644 --- a/xonsh/completers/_aliases.py +++ b/xonsh/completers/_aliases.py @@ -158,7 +158,7 @@ def complete_aliases(command: CommandContext): if cmd not in XSH.aliases: # only complete aliases return - alias = XSH.aliases.get(cmd) # type: ignore + alias = XSH.aliases.get(cmd)[0] # type: ignore completer = getattr(alias, "xonsh_complete", None) if not completer: diff --git a/xonsh/procs/specs.py b/xonsh/procs/specs.py index 6df5c96d6a..4f504682f9 100644 --- a/xonsh/procs/specs.py +++ b/xonsh/procs/specs.py @@ -705,29 +705,42 @@ def resolve_redirects(self): self.cmd = new_cmd def resolve_alias(self): - """Sets alias in command, if applicable.""" + """Resolving alias and setting up command.""" cmd0 = self.cmd[0] - spec_modifiers = [] if cmd0 in self.alias_stack: # Disabling the alias resolving to prevent infinite loop in call stack - # and futher using binary_loc to resolve the alias name. + # and further using binary_loc to resolve the alias name. self.alias = None return if callable(cmd0): - alias = cmd0 + self.alias = cmd0 else: + found_spec_modifiers = [] if isinstance(XSH.aliases, dict): # Windows tests alias = XSH.aliases.get(cmd0, None) + if alias is not None: + alias = alias + self.cmd[1:] else: - alias = XSH.aliases.get(cmd0, None, spec_modifiers=spec_modifiers) + alias = XSH.aliases.get( + self.cmd, + None, + spec_modifiers=found_spec_modifiers, + ) if alias is not None: self.alias_name = cmd0 - self.alias = alias - if spec_modifiers: - for mod in spec_modifiers: - self.add_spec_modifier(mod) + if callable(alias[0]): + # E.g. 
`alias == [FuncAlias({'name': 'cd'}), '/tmp']` + self.alias = alias[0] + self.cmd = [cmd0] + alias[1:] + else: + # E.g. `alias == ['ls', '-la']` + self.alias = alias + + if found_spec_modifiers: + for mod in found_spec_modifiers: + self.add_spec_modifier(mod) def resolve_binary_loc(self): """Sets the binary location""" @@ -765,8 +778,7 @@ def resolve_executable_commands(self): self.cmd.pop(0) return else: - self.cmd = alias + self.cmd[1:] - # resolve any redirects the aliases may have applied + self.cmd = alias self.resolve_redirects() if self.binary_loc is None: return @@ -971,7 +983,13 @@ def _trace_specs(trace_mode, specs, cmds, captured): } p |= { a: getattr(s, a, None) - for a in ["alias_name", "binary_loc", "threadable", "background"] + for a in [ + "alias_name", + "alias", + "binary_loc", + "threadable", + "background", + ] } if trace_mode == 3: p |= {
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-22840@2446f4c
sympy/sympy
Python
22840
Peabody/cse matrixsymbol
Treat matrix symbols and subelements as atomic instead of redefining them as individual variables. fixes #11991 by reviving PR #13185 Thanks @peabody124 for original fix, hopefully with release notes we can merge this @czgdp1807 (couldn't push directly to that PR since I'm not a maintainer) <!-- BEGIN RELEASE NOTES --> * simplify * CSE now treats matrix symbols and subelements as atomic in cse <!-- END RELEASE NOTES -->
2022-01-11T17:34:54Z
cse() has strange behaviour for MatrixSymbol indexing Example: ```python import sympy as sp from pprint import pprint def sub_in_matrixsymbols(exp, matrices): for matrix in matrices: for i in range(matrix.shape[0]): for j in range(matrix.shape[1]): name = "%s_%d_%d" % (matrix.name, i, j) sym = sp.symbols(name) exp = exp.subs(sym, matrix[i, j]) return exp def t44(name): return sp.Matrix(4, 4, lambda i, j: sp.symbols('%s_%d_%d' % (name, i, j))) # Construct matrices of symbols that work with our # expressions. (MatrixSymbols does not.) a = t44("a") b = t44("b") # Set up expression. This is a just a simple example. e = a * b # Put in matrixsymbols. (Gives array-input in codegen.) e2 = sub_in_matrixsymbols(e, [sp.MatrixSymbol("a", 4, 4), sp.MatrixSymbol("b", 4, 4)]) cse_subs, cse_reduced = sp.cse(e2) pprint((cse_subs, cse_reduced)) # Codegen, etc.. print "\nccode:" for sym, expr in cse_subs: constants, not_c, c_expr = sympy.printing.ccode( expr, human=False, assign_to=sympy.printing.ccode(sym), ) assert not constants, constants assert not not_c, not_c print "%s\n" % c_expr ``` This gives the following output: ``` ([(x0, a), (x1, x0[0, 0]), (x2, b), (x3, x2[0, 0]), (x4, x0[0, 1]), (x5, x2[1, 0]), (x6, x0[0, 2]), (x7, x2[2, 0]), (x8, x0[0, 3]), (x9, x2[3, 0]), (x10, x2[0, 1]), (x11, x2[1, 1]), (x12, x2[2, 1]), (x13, x2[3, 1]), (x14, x2[0, 2]), (x15, x2[1, 2]), (x16, x2[2, 2]), (x17, x2[3, 2]), (x18, x2[0, 3]), (x19, x2[1, 3]), (x20, x2[2, 3]), (x21, x2[3, 3]), (x22, x0[1, 0]), (x23, x0[1, 1]), (x24, x0[1, 2]), (x25, x0[1, 3]), (x26, x0[2, 0]), (x27, x0[2, 1]), (x28, x0[2, 2]), (x29, x0[2, 3]), (x30, x0[3, 0]), (x31, x0[3, 1]), (x32, x0[3, 2]), (x33, x0[3, 3])], [Matrix([ [ x1*x3 + x4*x5 + x6*x7 + x8*x9, x1*x10 + x11*x4 + x12*x6 + x13*x8, x1*x14 + x15*x4 + x16*x6 + x17*x8, x1*x18 + x19*x4 + x20*x6 + x21*x8], [x22*x3 + x23*x5 + x24*x7 + x25*x9, x10*x22 + x11*x23 + x12*x24 + x13*x25, x14*x22 + x15*x23 + x16*x24 + x17*x25, x18*x22 + x19*x23 + x20*x24 + x21*x25], [x26*x3 + x27*x5 + x28*x7 + x29*x9, x10*x26 + x11*x27 + x12*x28 + x13*x29, x14*x26 + x15*x27 + x16*x28 + x17*x29, x18*x26 + x19*x27 + x20*x28 + x21*x29], [x3*x30 + x31*x5 + x32*x7 + x33*x9, x10*x30 + x11*x31 + x12*x32 + x13*x33, x14*x30 + x15*x31 + x16*x32 + x17*x33, x18*x30 + x19*x31 + x20*x32 + x21*x33]])]) ccode: x0[0] = a[0]; x0[1] = a[1]; x0[2] = a[2]; x0[3] = a[3]; x0[4] = a[4]; x0[5] = a[5]; x0[6] = a[6]; x0[7] = a[7]; x0[8] = a[8]; x0[9] = a[9]; x0[10] = a[10]; x0[11] = a[11]; x0[12] = a[12]; x0[13] = a[13]; x0[14] = a[14]; x0[15] = a[15]; x1 = x0[0]; x2[0] = b[0]; x2[1] = b[1]; x2[2] = b[2]; x2[3] = b[3]; x2[4] = b[4]; x2[5] = b[5]; x2[6] = b[6]; x2[7] = b[7]; x2[8] = b[8]; x2[9] = b[9]; x2[10] = b[10]; x2[11] = b[11]; x2[12] = b[12]; x2[13] = b[13]; x2[14] = b[14]; x2[15] = b[15]; x3 = x2[0]; x4 = x0[1]; x5 = x2[4]; x6 = x0[2]; x7 = x2[8]; x8 = x0[3]; x9 = x2[12]; x10 = x2[1]; x11 = x2[5]; x12 = x2[9]; x13 = x2[13]; x14 = x2[2]; x15 = x2[6]; x16 = x2[10]; x17 = x2[14]; x18 = x2[3]; x19 = x2[7]; x20 = x2[11]; x21 = x2[15]; x22 = x0[4]; x23 = x0[5]; x24 = x0[6]; x25 = x0[7]; x26 = x0[8]; x27 = x0[9]; x28 = x0[10]; x29 = x0[11]; x30 = x0[12]; x31 = x0[13]; x32 = x0[14]; x33 = x0[15]; ``` `x0` and `x2` are just copies of the matrices `a` and `b`, respectively.
Can you create a very simple example using MatrixSymbol and the expected output that you'd like to see? I think one would expect the output to be similar to the following (except for the expression returned by CSE being a matrix where the individual elements are terms as defined by matrix multiplication, that is, unchanged by `cse()`). ```py import sympy as sp from pprint import pprint import sympy.printing.ccode def print_ccode(assign_to, expr): constants, not_c, c_expr = sympy.printing.ccode( expr, human=False, assign_to=assign_to, ) assert not constants, constants assert not not_c, not_c print "%s" % c_expr a = sp.MatrixSymbol("a", 4, 4) b = sp.MatrixSymbol("b", 4, 4) # Set up expression. This is a just a simple example. e = a * b print "\nexpr:" print e cse_subs, cse_reduced = sp.cse(e) print "\ncse(expr):" pprint((cse_subs, cse_reduced)) # Codegen. print "\nccode:" for sym, expr in cse_subs: print_ccode(sympy.printing.ccode(sym), expr) assert len(cse_reduced) == 1 print_ccode(sympy.printing.ccode(sp.symbols("result")), cse_reduced[0]) ``` Gives the output: ``` expr: a*b cse(expr): ([], [a*b]) ccode: result[0] = a[0]*b[0] + a[1]*b[4] + a[2]*b[8] + a[3]*b[12]; result[1] = a[0]*b[1] + a[1]*b[5] + a[2]*b[9] + a[3]*b[13]; result[2] = a[0]*b[2] + a[1]*b[6] + a[2]*b[10] + a[3]*b[14]; result[3] = a[0]*b[3] + a[1]*b[7] + a[2]*b[11] + a[3]*b[15]; result[4] = a[4]*b[0] + a[5]*b[4] + a[6]*b[8] + a[7]*b[12]; result[5] = a[4]*b[1] + a[5]*b[5] + a[6]*b[9] + a[7]*b[13]; result[6] = a[4]*b[2] + a[5]*b[6] + a[6]*b[10] + a[7]*b[14]; result[7] = a[4]*b[3] + a[5]*b[7] + a[6]*b[11] + a[7]*b[15]; result[8] = a[8]*b[0] + a[9]*b[4] + a[10]*b[8] + a[11]*b[12]; result[9] = a[8]*b[1] + a[9]*b[5] + a[10]*b[9] + a[11]*b[13]; result[10] = a[8]*b[2] + a[9]*b[6] + a[10]*b[10] + a[11]*b[14]; result[11] = a[8]*b[3] + a[9]*b[7] + a[10]*b[11] + a[11]*b[15]; result[12] = a[12]*b[0] + a[13]*b[4] + a[14]*b[8] + a[15]*b[12]; result[13] = a[12]*b[1] + a[13]*b[5] + a[14]*b[9] + a[15]*b[13]; result[14] = a[12]*b[2] + a[13]*b[6] + a[14]*b[10] + a[15]*b[14]; result[15] = a[12]*b[3] + a[13]*b[7] + a[14]*b[11] + a[15]*b[15]; ``` Thanks. Note that it doesn't look like cse is well tested (i.e. designed) for MatrixSymbols based on the unit tests: https://github.com/sympy/sympy/blob/master/sympy/simplify/tests/test_cse.py#L315. Those tests don't really prove that it works as desired. So this definitely needs to be fixed. The first part works as expected: ``` In [1]: import sympy as sm In [2]: M = sm.MatrixSymbol('M', 3, 3) In [3]: B = sm.MatrixSymbol('B', 3, 3) In [4]: M * B Out[4]: M*B In [5]: sm.cse(M * B) Out[5]: ([], [M*B]) ``` For the ccode of an expression of MatrixSymbols, I would not expect it to print the results as you have them. MatrixSymbols should map to a matrix algebra library like BLAS and LINPACK. But Matrix, on the other hand, should do what you expect. 
Note how this works: ``` In [8]: M = sm.Matrix(3, 3, lambda i, j: sm.Symbol('M_{}{}'.format(i, j))) In [9]: M Out[9]: Matrix([ [M_00, M_01, M_02], [M_10, M_11, M_12], [M_20, M_21, M_22]]) In [10]: B = sm.Matrix(3, 3, lambda i, j: sm.Symbol('B_{}{}'.format(i, j))) In [11]: B Out[11]: Matrix([ [B_00, B_01, B_02], [B_10, B_11, B_12], [B_20, B_21, B_22]]) In [12]: M * B Out[12]: Matrix([ [B_00*M_00 + B_10*M_01 + B_20*M_02, B_01*M_00 + B_11*M_01 + B_21*M_02, B_02*M_00 + B_12*M_01 + B_22*M_02], [B_00*M_10 + B_10*M_11 + B_20*M_12, B_01*M_10 + B_11*M_11 + B_21*M_12, B_02*M_10 + B_12*M_11 + B_22*M_12], [B_00*M_20 + B_10*M_21 + B_20*M_22, B_01*M_20 + B_11*M_21 + B_21*M_22, B_02*M_20 + B_12*M_21 + B_22*M_22]]) In [13]: sm.cse(M * B) Out[13]: ([], [Matrix([ [B_00*M_00 + B_10*M_01 + B_20*M_02, B_01*M_00 + B_11*M_01 + B_21*M_02, B_02*M_00 + B_12*M_01 + B_22*M_02], [B_00*M_10 + B_10*M_11 + B_20*M_12, B_01*M_10 + B_11*M_11 + B_21*M_12, B_02*M_10 + B_12*M_11 + B_22*M_12], [B_00*M_20 + B_10*M_21 + B_20*M_22, B_01*M_20 + B_11*M_21 + B_21*M_22, B_02*M_20 + B_12*M_21 + B_22*M_22]])]) In [17]: print(sm.ccode(M * B, assign_to=sm.MatrixSymbol('E', 3, 3))) E[0] = B_00*M_00 + B_10*M_01 + B_20*M_02; E[1] = B_01*M_00 + B_11*M_01 + B_21*M_02; E[2] = B_02*M_00 + B_12*M_01 + B_22*M_02; E[3] = B_00*M_10 + B_10*M_11 + B_20*M_12; E[4] = B_01*M_10 + B_11*M_11 + B_21*M_12; E[5] = B_02*M_10 + B_12*M_11 + B_22*M_12; E[6] = B_00*M_20 + B_10*M_21 + B_20*M_22; E[7] = B_01*M_20 + B_11*M_21 + B_21*M_22; E[8] = B_02*M_20 + B_12*M_21 + B_22*M_22; ``` But in order to get a single input argument from codegen it cannot be different symbols, and if you replace each symbol with a `MatrixSymbol[i, j]` then `cse()` starts doing the above non-optiimizations for some reason. As far as I know, `codegen` does not work with Matrix or MatrixSymbol's in any meaningful way. There are related issues: #11456 #4367 #10522 In general, there needs to be work done in the code generators to properly support matrices. As a work around, I suggest using `ccode` and a custom template to get the result you want.
[ { "body": "Example: \r\n```python\r\nimport sympy as sp\r\nfrom pprint import pprint\r\n\r\n\r\ndef sub_in_matrixsymbols(exp, matrices):\r\n for matrix in matrices:\r\n for i in range(matrix.shape[0]):\r\n for j in range(matrix.shape[1]):\r\n name = \"%s_%d_%d\" % (matrix.name, i, j)\r\n sym = sp.symbols(name)\r\n exp = exp.subs(sym, matrix[i, j])\r\n return exp\r\n\r\n\r\ndef t44(name):\r\n return sp.Matrix(4, 4, lambda i, j: sp.symbols('%s_%d_%d' % (name, i, j)))\r\n\r\n\r\n# Construct matrices of symbols that work with our\r\n# expressions. (MatrixSymbols does not.)\r\na = t44(\"a\")\r\nb = t44(\"b\")\r\n\r\n# Set up expression. This is a just a simple example.\r\ne = a * b\r\n\r\n# Put in matrixsymbols. (Gives array-input in codegen.)\r\ne2 = sub_in_matrixsymbols(e, [sp.MatrixSymbol(\"a\", 4, 4), sp.MatrixSymbol(\"b\", 4, 4)])\r\ncse_subs, cse_reduced = sp.cse(e2)\r\npprint((cse_subs, cse_reduced))\r\n\r\n# Codegen, etc..\r\nprint \"\\nccode:\"\r\nfor sym, expr in cse_subs:\r\n constants, not_c, c_expr = sympy.printing.ccode(\r\n expr,\r\n human=False,\r\n assign_to=sympy.printing.ccode(sym),\r\n )\r\n assert not constants, constants\r\n assert not not_c, not_c\r\n print \"%s\\n\" % c_expr\r\n\r\n```\r\n\r\nThis gives the following output:\r\n\r\n```\r\n([(x0, a),\r\n (x1, x0[0, 0]),\r\n (x2, b),\r\n (x3, x2[0, 0]),\r\n (x4, x0[0, 1]),\r\n (x5, x2[1, 0]),\r\n (x6, x0[0, 2]),\r\n (x7, x2[2, 0]),\r\n (x8, x0[0, 3]),\r\n (x9, x2[3, 0]),\r\n (x10, x2[0, 1]),\r\n (x11, x2[1, 1]),\r\n (x12, x2[2, 1]),\r\n (x13, x2[3, 1]),\r\n (x14, x2[0, 2]),\r\n (x15, x2[1, 2]),\r\n (x16, x2[2, 2]),\r\n (x17, x2[3, 2]),\r\n (x18, x2[0, 3]),\r\n (x19, x2[1, 3]),\r\n (x20, x2[2, 3]),\r\n (x21, x2[3, 3]),\r\n (x22, x0[1, 0]),\r\n (x23, x0[1, 1]),\r\n (x24, x0[1, 2]),\r\n (x25, x0[1, 3]),\r\n (x26, x0[2, 0]),\r\n (x27, x0[2, 1]),\r\n (x28, x0[2, 2]),\r\n (x29, x0[2, 3]),\r\n (x30, x0[3, 0]),\r\n (x31, x0[3, 1]),\r\n (x32, x0[3, 2]),\r\n (x33, x0[3, 3])],\r\n [Matrix([\r\n[ x1*x3 + x4*x5 + x6*x7 + x8*x9, x1*x10 + x11*x4 + x12*x6 + x13*x8, x1*x14 + x15*x4 + x16*x6 + x17*x8, x1*x18 + x19*x4 + x20*x6 + x21*x8],\r\n[x22*x3 + x23*x5 + x24*x7 + x25*x9, x10*x22 + x11*x23 + x12*x24 + x13*x25, x14*x22 + x15*x23 + x16*x24 + x17*x25, x18*x22 + x19*x23 + x20*x24 + x21*x25],\r\n[x26*x3 + x27*x5 + x28*x7 + x29*x9, x10*x26 + x11*x27 + x12*x28 + x13*x29, x14*x26 + x15*x27 + x16*x28 + x17*x29, x18*x26 + x19*x27 + x20*x28 + x21*x29],\r\n[x3*x30 + x31*x5 + x32*x7 + x33*x9, x10*x30 + x11*x31 + x12*x32 + x13*x33, x14*x30 + x15*x31 + x16*x32 + x17*x33, x18*x30 + x19*x31 + x20*x32 + x21*x33]])])\r\n\r\nccode:\r\nx0[0] = a[0];\r\nx0[1] = a[1];\r\nx0[2] = a[2];\r\nx0[3] = a[3];\r\nx0[4] = a[4];\r\nx0[5] = a[5];\r\nx0[6] = a[6];\r\nx0[7] = a[7];\r\nx0[8] = a[8];\r\nx0[9] = a[9];\r\nx0[10] = a[10];\r\nx0[11] = a[11];\r\nx0[12] = a[12];\r\nx0[13] = a[13];\r\nx0[14] = a[14];\r\nx0[15] = a[15];\r\nx1 = x0[0];\r\nx2[0] = b[0];\r\nx2[1] = b[1];\r\nx2[2] = b[2];\r\nx2[3] = b[3];\r\nx2[4] = b[4];\r\nx2[5] = b[5];\r\nx2[6] = b[6];\r\nx2[7] = b[7];\r\nx2[8] = b[8];\r\nx2[9] = b[9];\r\nx2[10] = b[10];\r\nx2[11] = b[11];\r\nx2[12] = b[12];\r\nx2[13] = b[13];\r\nx2[14] = b[14];\r\nx2[15] = b[15];\r\nx3 = x2[0];\r\nx4 = x0[1];\r\nx5 = x2[4];\r\nx6 = x0[2];\r\nx7 = x2[8];\r\nx8 = x0[3];\r\nx9 = x2[12];\r\nx10 = x2[1];\r\nx11 = x2[5];\r\nx12 = x2[9];\r\nx13 = x2[13];\r\nx14 = x2[2];\r\nx15 = x2[6];\r\nx16 = x2[10];\r\nx17 = x2[14];\r\nx18 = x2[3];\r\nx19 = x2[7];\r\nx20 = x2[11];\r\nx21 = x2[15];\r\nx22 = x0[4];\r\nx23 = x0[5];\r\nx24 = x0[6];\r\nx25 
= x0[7];\r\nx26 = x0[8];\r\nx27 = x0[9];\r\nx28 = x0[10];\r\nx29 = x0[11];\r\nx30 = x0[12];\r\nx31 = x0[13];\r\nx32 = x0[14];\r\nx33 = x0[15];\r\n```\r\n\r\n`x0` and `x2` are just copies of the matrices `a` and `b`, respectively.", "number": 11991, "title": "cse() has strange behaviour for MatrixSymbol indexing" } ]
d822fcba181155b85ff2b29fe525adbafb22b448
{ "head_commit": "2446f4c1208b0bd8539b156ac00b3489d1c65807", "head_commit_message": "Update sympy/simplify/cse_main.py", "patch_to_review": "diff --git a/sympy/simplify/cse_main.py b/sympy/simplify/cse_main.py\nindex d649dd02a952..aa3199f5c005 100644\n--- a/sympy/simplify/cse_main.py\n+++ b/sympy/simplify/cse_main.py\n@@ -567,6 +567,7 @@ def tree_cse(exprs, symbols, opt_subs=None, order='canonical', ignore=()):\n Substitutions containing any Symbol from ``ignore`` will be ignored.\n \"\"\"\n from sympy.matrices.expressions import MatrixExpr, MatrixSymbol, MatMul, MatAdd\n+ from sympy.matrices.expressions.matexpr import MatrixElement\n from sympy.polys.rootoftools import RootOf\n \n if opt_subs is None:\n@@ -586,7 +587,10 @@ def _find_repeated(expr):\n if isinstance(expr, RootOf):\n return\n \n- if isinstance(expr, Basic) and (expr.is_Atom or expr.is_Order):\n+ if isinstance(expr, Basic) and (\n+ expr.is_Atom or\n+ expr.is_Order or\n+ isinstance(expr, (MatrixSymbol, MatrixElement)):\n if expr.is_Symbol:\n excluded_symbols.add(expr)\n return\ndiff --git a/sympy/simplify/tests/test_cse.py b/sympy/simplify/tests/test_cse.py\nindex eb9cc231316b..ed6c97d72f8f 100644\n--- a/sympy/simplify/tests/test_cse.py\n+++ b/sympy/simplify/tests/test_cse.py\n@@ -347,6 +347,10 @@ def test_cse_MatrixSymbol():\n B = MatrixSymbol(\"B\", n, n)\n assert cse(B) == ([], [B])\n \n+ assert cse(A[0] * A[0]) == ([], [A[0]*A[0]])\n+\n+ assert cse(A[0,0]*A[0,1] + A[0,0]*A[0,1]*A[0,2]) == ([(x0, A[0, 0]*A[0, 1])], [x0*A[0, 2] + x0])\n+\n def test_cse_MatrixExpr():\n A = MatrixSymbol('A', 3, 3)\n y = MatrixSymbol('y', 3, 1)\ndiff --git a/sympy/utilities/tests/test_codegen.py b/sympy/utilities/tests/test_codegen.py\nindex 9d90e3d8a439..f90d5eaa68b3 100644\n--- a/sympy/utilities/tests/test_codegen.py\n+++ b/sympy/utilities/tests/test_codegen.py\n@@ -531,26 +531,9 @@ def test_multidim_c_argument_cse():\n '#include \"test.h\"\\n'\n \"#include <math.h>\\n\"\n \"void c(double *A, double *b, double *out) {\\n\"\n- \" double x0[9];\\n\"\n- \" x0[0] = A[0];\\n\"\n- \" x0[1] = A[1];\\n\"\n- \" x0[2] = A[2];\\n\"\n- \" x0[3] = A[3];\\n\"\n- \" x0[4] = A[4];\\n\"\n- \" x0[5] = A[5];\\n\"\n- \" x0[6] = A[6];\\n\"\n- \" x0[7] = A[7];\\n\"\n- \" x0[8] = A[8];\\n\"\n- \" double x1[3];\\n\"\n- \" x1[0] = b[0];\\n\"\n- \" x1[1] = b[1];\\n\"\n- \" x1[2] = b[2];\\n\"\n- \" const double x2 = x1[0];\\n\"\n- \" const double x3 = x1[1];\\n\"\n- \" const double x4 = x1[2];\\n\"\n- \" out[0] = x2*x0[0] + x3*x0[1] + x4*x0[2];\\n\"\n- \" out[1] = x2*x0[3] + x3*x0[4] + x4*x0[5];\\n\"\n- \" out[2] = x2*x0[6] + x3*x0[7] + x4*x0[8];\\n\"\n+ \" out[0] = A[0]*b[0] + A[1]*b[1] + A[2]*b[2];\\n\"\n+ \" out[1] = A[3]*b[0] + A[4]*b[1] + A[5]*b[2];\\n\"\n+ \" out[2] = A[6]*b[0] + A[7]*b[1] + A[8]*b[2];\\n\"\n \"}\\n\"\n )\n assert code == expected\n" }
[ { "diff_hunk": "@@ -586,7 +587,10 @@ def _find_repeated(expr):\n if isinstance(expr, RootOf):\n return\n \n- if isinstance(expr, Basic) and (expr.is_Atom or expr.is_Order):\n+ if isinstance(expr, Basic) and (\n+ expr.is_Atom or\n+ expr.is_Order or\n+ isinstance(expr, (MatrixSymbol, MatrixElement)):", "line": null, "original_line": 593, "original_start_line": null, "path": "sympy/simplify/cse_main.py", "start_line": null, "text": "@user1:\n```suggestion\r\n isinstance(expr, (MatrixSymbol, MatrixElement))):\r\n```" } ]
86d13ad378a99a134b271a5b1793a55cb1af478a
diff --git a/sympy/simplify/cse_main.py b/sympy/simplify/cse_main.py index d649dd02a952..88e3f9335e8b 100644 --- a/sympy/simplify/cse_main.py +++ b/sympy/simplify/cse_main.py @@ -567,6 +567,7 @@ def tree_cse(exprs, symbols, opt_subs=None, order='canonical', ignore=()): Substitutions containing any Symbol from ``ignore`` will be ignored. """ from sympy.matrices.expressions import MatrixExpr, MatrixSymbol, MatMul, MatAdd + from sympy.matrices.expressions.matexpr import MatrixElement from sympy.polys.rootoftools import RootOf if opt_subs is None: @@ -586,7 +587,10 @@ def _find_repeated(expr): if isinstance(expr, RootOf): return - if isinstance(expr, Basic) and (expr.is_Atom or expr.is_Order): + if isinstance(expr, Basic) and ( + expr.is_Atom or + expr.is_Order or + isinstance(expr, (MatrixSymbol, MatrixElement))): if expr.is_Symbol: excluded_symbols.add(expr) return diff --git a/sympy/simplify/tests/test_cse.py b/sympy/simplify/tests/test_cse.py index eb9cc231316b..ed6c97d72f8f 100644 --- a/sympy/simplify/tests/test_cse.py +++ b/sympy/simplify/tests/test_cse.py @@ -347,6 +347,10 @@ def test_cse_MatrixSymbol(): B = MatrixSymbol("B", n, n) assert cse(B) == ([], [B]) + assert cse(A[0] * A[0]) == ([], [A[0]*A[0]]) + + assert cse(A[0,0]*A[0,1] + A[0,0]*A[0,1]*A[0,2]) == ([(x0, A[0, 0]*A[0, 1])], [x0*A[0, 2] + x0]) + def test_cse_MatrixExpr(): A = MatrixSymbol('A', 3, 3) y = MatrixSymbol('y', 3, 1) diff --git a/sympy/utilities/tests/test_codegen.py b/sympy/utilities/tests/test_codegen.py index 9d90e3d8a439..f90d5eaa68b3 100644 --- a/sympy/utilities/tests/test_codegen.py +++ b/sympy/utilities/tests/test_codegen.py @@ -531,26 +531,9 @@ def test_multidim_c_argument_cse(): '#include "test.h"\n' "#include <math.h>\n" "void c(double *A, double *b, double *out) {\n" - " double x0[9];\n" - " x0[0] = A[0];\n" - " x0[1] = A[1];\n" - " x0[2] = A[2];\n" - " x0[3] = A[3];\n" - " x0[4] = A[4];\n" - " x0[5] = A[5];\n" - " x0[6] = A[6];\n" - " x0[7] = A[7];\n" - " x0[8] = A[8];\n" - " double x1[3];\n" - " x1[0] = b[0];\n" - " x1[1] = b[1];\n" - " x1[2] = b[2];\n" - " const double x2 = x1[0];\n" - " const double x3 = x1[1];\n" - " const double x4 = x1[2];\n" - " out[0] = x2*x0[0] + x3*x0[1] + x4*x0[2];\n" - " out[1] = x2*x0[3] + x3*x0[4] + x4*x0[5];\n" - " out[2] = x2*x0[6] + x3*x0[7] + x4*x0[8];\n" + " out[0] = A[0]*b[0] + A[1]*b[1] + A[2]*b[2];\n" + " out[1] = A[3]*b[0] + A[4]*b[1] + A[5]*b[2];\n" + " out[2] = A[6]*b[0] + A[7]*b[1] + A[8]*b[2];\n" "}\n" ) assert code == expected
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-22841@000db05
sympy/sympy
Python
22,841
watch for Add->Mul in as_numer_denom
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> fixes #22837 #### Brief description of what is fixed or changed ```python formerly >>> Add(0,(x+y)/z/-2,evaluate=0).as_numer_denom() (z*(x + y - 1) + 1, 2*z) now >>> Add(0,(x+y)/z/-2,evaluate=0).as_numer_denom() (-x - y, 2*z) ``` #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2022-01-11T18:48:08Z
Solve simplest algebraic equations with dummy parameter Very strange behaviour of `solve` command in SymPy, when I try to solve quadratic equation with parameter `a` (which is dummy here, because solution doesn't depends on it): ``` from sympy import * x, a = symbols("x a") eq = Eq(0, (4 - 4*x + x**2)/(4*a**2)) print(solve(eq, x)) print(solve(simplify(eq), x)) ``` Output: ``` [2 - sqrt(a**2 - 1)/a, 2 + sqrt(a**2 - 1)/a] [2] ``` Just `solve` gives two (!) solutions, which depend on `a`. After symplifying it gives only solution `x=2`, which is correct. What happens? Command `solveset` works correctly, but I am interested in using `solve` command. My SymPy version is 1.8.
Simpler example: ``` from sympy import * x, a = symbols("x a") eq = Eq(0, x / (2 * a)) print(solve(eq, x)) print(solve(simplify(eq), x)) ``` Output: ``` [(a - 1)/a] [0] ``` But if we swap sides of the equation `eq = Eq(x / (2 * a), 0)`, then `solve` solves it correctly! Not sure what happens, but if you do ``` eq = (4 - 4*x + x**2)/(4*a**2) print(solve(eq, x)) ``` You get the expected result. I guess that there should be something like ``` if isinstance(expr, Eq): expr = expr.rhs - expr.lhs ``` in `solve`, but haven't checked (and it looks here like not). Also ``` eq = Eq((4 - 4*x + x**2)/(4*a**2), 0) print(solve(eq, x)) ``` works, so looks like a bug. Thank you, Oscar! I obtained this quadratic equation (with much more parameters) after substitution in the solution of ordinary differential equation, so it's not very convenient for me to do these manual reorderings and simplifications, I just want to solve it as is. The problem is somewhere in the order of parts of the equation: `x/(2*a)=0` solved correctly, but `0=x/(2*a)` - incorrectly. Another hint: `0=x/(1*a)` gives correct answer, but using any other constant instead of 1 and -1 in denominator leads to wrong answer. ```python >>> from sympy import * >>> x, a = symbols("x a") >>> eq = Eq(0, (4 - 4*x + x**2)/(4*a**2)) >>> print(solveset(eq, x)) >>> print(solveset(simplify(eq), x)) FiniteSet(2) FiniteSet(2) ``` `solveset` seems to give the right answer in the given example. The `solve` function has lots of weird code paths depending on exactly how the input is provided e.g. you can get different results for `solve(eq, x)` vs `solve([eq], x)` etc or different results for `Eq` vs `Expr`: ```python In [2]: solve(eq, x) Out[2]: ⎡ ________ ________⎤ ⎢ ╱ 2 ╱ 2 ⎥ ⎢ ╲╱ a - 1 ╲╱ a - 1 ⎥ ⎢2 - ───────────, 2 + ───────────⎥ ⎣ a a ⎦ In [3]: solve([eq], x) Out[3]: [(2,)] ``` The main branch is here: https://github.com/sympy/sympy/blob/d822fcba181155b85ff2b29fe525adbafb22b448/sympy/solvers/solvers.py#L1106-L1109 Try working backwards though and see if you can figure out when `bare_f` is true or false or what the point of it is. It's a problem with `as_numer_denom()`
[ { "body": "Very strange behaviour of `solve` command in SymPy, when I try to solve quadratic equation with parameter `a` (which is dummy here, because solution doesn't depends on it):\r\n```\r\nfrom sympy import *\r\nx, a = symbols(\"x a\")\r\neq = Eq(0, (4 - 4*x + x**2)/(4*a**2))\r\nprint(solve(eq, x))\r\nprint(solve(simplify(eq), x))\r\n```\r\nOutput:\r\n```\r\n[2 - sqrt(a**2 - 1)/a, 2 + sqrt(a**2 - 1)/a]\r\n[2]\r\n```\r\nJust `solve` gives two (!) solutions, which depend on `a`. After symplifying it gives only solution `x=2`, which is correct. What happens? Command `solveset` works correctly, but I am interested in using `solve` command. My SymPy version is 1.8. ", "number": 22837, "title": "Solve simplest algebraic equations with dummy parameter" } ]
d822fcba181155b85ff2b29fe525adbafb22b448
{ "head_commit": "000db05c9a056619a610fd5ef9522e4855a7963c", "head_commit_message": "do not return 0 arg in Eq.rewrite(Add)", "patch_to_review": "diff --git a/sympy/core/add.py b/sympy/core/add.py\nindex 890d31369227..88dfbbc0c298 100644\n--- a/sympy/core/add.py\n+++ b/sympy/core/add.py\n@@ -613,6 +613,8 @@ def as_numer_denom(self):\n \"\"\"\n # clear rational denominator\n content, expr = self.primitive()\n+ if not isinstance(expr, self.func):\n+ return Mul(content, expr, evaluate=False).as_numer_denom()\n ncon, dcon = content.as_numer_denom()\n \n # collect numerators and denominators of the terms\ndiff --git a/sympy/core/relational.py b/sympy/core/relational.py\nindex 7a4be766fdb1..3f551dda3352 100644\n--- a/sympy/core/relational.py\n+++ b/sympy/core/relational.py\n@@ -578,7 +578,8 @@ def _eval_rewrite_as_Add(self, *args, **kwargs):\n the result set pass `evaluate=True` to give L - R;\n if `evaluate=None` then terms in L and R will not cancel\n but they will be listed in canonical order; otherwise\n- non-canonical args will be returned.\n+ non-canonical args will be returned. If one side is 0, the\n+ non-zero side will be returned.\n \n Examples\n ========\n@@ -595,6 +596,10 @@ def _eval_rewrite_as_Add(self, *args, **kwargs):\n \"\"\"\n from .add import _unevaluated_Add, Add\n L, R = args\n+ if L == 0:\n+ return R\n+ if R == 0:\n+ return L\n evaluate = kwargs.get('evaluate', True)\n if evaluate:\n # allow cancellation of args\ndiff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py\nindex b332f89d3346..eccb45ad90d0 100644\n--- a/sympy/core/tests/test_expr.py\n+++ b/sympy/core/tests/test_expr.py\n@@ -827,6 +827,10 @@ def test_as_numer_denom():\n assert ((A*B*C)**-1).as_numer_denom() == ((A*B*C)**-1, 1)\n assert ((A*B*C)**-1/x).as_numer_denom() == ((A*B*C)**-1, x)\n \n+ # the following morphs from Add to Mul during processing\n+ assert Add(0, (x + y)/z/-2, evaluate=False).as_numer_denom(\n+ ) == (-x - y, 2*z)\n+\n \n def test_trunc():\n import math\ndiff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py\nindex 21fcaf97118d..57d5d65d2d48 100644\n--- a/sympy/core/tests/test_relational.py\n+++ b/sympy/core/tests/test_relational.py\n@@ -1034,6 +1034,9 @@ def test_Equality_rewrite_as_Add():\n assert eq.rewrite(Add) == 2*x\n assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y)\n assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y)\n+ for e in (True, False, None):\n+ assert Eq(x, 0, evaluate=e).rewrite(Add) == x\n+ assert Eq(0, x, evaluate=e).rewrite(Add) == x\n \n \n def test_issue_15847():\n" }
[ { "diff_hunk": "@@ -613,6 +613,8 @@ def as_numer_denom(self):\n \"\"\"\n # clear rational denominator\n content, expr = self.primitive()\n+ if not isinstance(expr, self.func):", "line": null, "original_line": 616, "original_start_line": null, "path": "sympy/core/add.py", "start_line": null, "text": "@author:\n```suggestion\r\n if not isinstance(expr, Add):\r\n```" } ]
ef53083bd7afacc175d130cf2eb746aa5d53d172
diff --git a/sympy/core/add.py b/sympy/core/add.py index 890d31369227..41b5728fb8f7 100644 --- a/sympy/core/add.py +++ b/sympy/core/add.py @@ -613,6 +613,8 @@ def as_numer_denom(self): """ # clear rational denominator content, expr = self.primitive() + if not isinstance(expr, Add): + return Mul(content, expr, evaluate=False).as_numer_denom() ncon, dcon = content.as_numer_denom() # collect numerators and denominators of the terms diff --git a/sympy/core/relational.py b/sympy/core/relational.py index 7a4be766fdb1..3f551dda3352 100644 --- a/sympy/core/relational.py +++ b/sympy/core/relational.py @@ -578,7 +578,8 @@ def _eval_rewrite_as_Add(self, *args, **kwargs): the result set pass `evaluate=True` to give L - R; if `evaluate=None` then terms in L and R will not cancel but they will be listed in canonical order; otherwise - non-canonical args will be returned. + non-canonical args will be returned. If one side is 0, the + non-zero side will be returned. Examples ======== @@ -595,6 +596,10 @@ def _eval_rewrite_as_Add(self, *args, **kwargs): """ from .add import _unevaluated_Add, Add L, R = args + if L == 0: + return R + if R == 0: + return L evaluate = kwargs.get('evaluate', True) if evaluate: # allow cancellation of args diff --git a/sympy/core/tests/test_expr.py b/sympy/core/tests/test_expr.py index b332f89d3346..eccb45ad90d0 100644 --- a/sympy/core/tests/test_expr.py +++ b/sympy/core/tests/test_expr.py @@ -827,6 +827,10 @@ def test_as_numer_denom(): assert ((A*B*C)**-1).as_numer_denom() == ((A*B*C)**-1, 1) assert ((A*B*C)**-1/x).as_numer_denom() == ((A*B*C)**-1, x) + # the following morphs from Add to Mul during processing + assert Add(0, (x + y)/z/-2, evaluate=False).as_numer_denom( + ) == (-x - y, 2*z) + def test_trunc(): import math diff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py index 21fcaf97118d..57d5d65d2d48 100644 --- a/sympy/core/tests/test_relational.py +++ b/sympy/core/tests/test_relational.py @@ -1034,6 +1034,9 @@ def test_Equality_rewrite_as_Add(): assert eq.rewrite(Add) == 2*x assert eq.rewrite(Add, evaluate=None).args == (x, x, y, -y) assert eq.rewrite(Add, evaluate=False).args == (x, y, x, -y) + for e in (True, False, None): + assert Eq(x, 0, evaluate=e).rewrite(Add) == x + assert Eq(0, x, evaluate=e).rewrite(Add) == x def test_issue_15847():
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-22707@32e56ff
sympy/sympy
Python
22707
quality: fix type hints and remove type: ignore
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #22697 #### Brief description of what is fixed or changed Remove a lot of `# type: ignore` that aren't needed with latest mypy. Fix some inaccurate type annotations. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> NO ENTRY <!-- END RELEASE NOTES -->
2021-12-18T22:45:17Z
mypy code quality test failing The code quality test seems to be currently failing for all pull requests. This does not seem to have anything to do with the pull requests themselves. In particular, the `mypy` run is failing with the following message: ``` sympy/tensor/array/expressions/conv_array_to_matrix.py:288: error: No overload variant of "__getitem__" of "tuple" matches argument type "List[Any]" sympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: Possible overload variants: sympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: def __getitem__(self, int) -> Basic sympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: def __getitem__(self, slice) -> Tuple[Basic, ...] sympy/tensor/array/expressions/conv_array_to_matrix.py:290: error: No overload variant of "__getitem__" of "tuple" matches argument type "List[Any]" sympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: Possible overload variants: sympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: def __getitem__(self, int) -> Basic sympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: def __getitem__(self, slice) -> Tuple[Basic, ...] sympy/tensor/array/expressions/conv_array_to_matrix.py:494: error: Incompatible types in assignment (expression has type "List[Tuple[Any, ...]]", variable has type "Dict[Any, Tuple[Any, ...]]") sympy/tensor/array/expressions/conv_array_to_matrix.py:500: error: Incompatible types in assignment (expression has type "List[Any]", variable has type "Dict[Any, Tuple[Any, ...]]") sympy/tensor/array/expressions/arrayexpr_derivatives.py:26: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:66: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:77: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:82: error: "Expr" has no attribute "shape" Found 8 errors in 2 files (checked 1455 source files) Error: Process completed with exit code 1. ``` This should probably be fixed soon as it is also preventing other checks from running.
Yes ,I had reported this issue on the SymPy Gitter chat channel yesterday and asked for a potential fix .Thanks for opening this as an issue ! This diff makes mypy happy about the errors in `conv_array_to_matrix.py`: ```diff diff --git a/sympy/tensor/array/expressions/conv_array_to_matrix.py b/sympy/tensor/array/expressions/conv_array_to_matrix.py index 72e8369..6b8cfbf 100644 --- a/sympy/tensor/array/expressions/conv_array_to_matrix.py +++ b/sympy/tensor/array/expressions/conv_array_to_matrix.py @@ -283,11 +283,11 @@ def _(expr: PermuteDims): p2 = permuted[2*i+1] if p1 // 2 != p2 // 2: return _permute_dims(mat_mul_lines, permutation) - pos = p1 // 2 + pos1 = p1 // 2 if p1 > p2: - args_array[i] = _a2m_transpose(mat_mul_lines.args[pos]) + args_array[i] = _a2m_transpose(mat_mul_lines.args[pos1]) else: - args_array[i] = mat_mul_lines.args[pos] + args_array[i] = mat_mul_lines.args[pos1] return _a2m_tensor_product(*args_array) else: return expr @@ -491,15 +491,15 @@ def _(expr: ArrayDiagonal): for old_diag_tuple, new_diag_tuple in new_diag_indices.items(): if len(new_diag_tuple) == 1: removed = [i for i in removed if i not in old_diag_tuple] - new_diag_indices = [tuple(j - shifts[j] for j in i) for i in new_diag_indices.values()] + new_diag_indices2 = [tuple(j - shifts[j] for j in i) for i in new_diag_indices.values()] rank = get_rank(expr.expr) removed = ArrayDiagonal._push_indices_up(expr.diagonal_indices, removed, rank) removed = sorted({i for i in removed}) # If there are single axes to diagonalize remaining, it means that their # corresponding dimension has been removed, they no longer need diagonalization: - new_diag_indices = [i for i in new_diag_indices if len(i) > 0] - if len(new_diag_indices) > 0: - newexpr2 = _array_diagonal(newexpr, *new_diag_indices, allow_trivial_diags=True) + new_diag_indices3 = [i for i in new_diag_indices2 if len(i) > 0] + if len(new_diag_indices3) > 0: + newexpr2 = _array_diagonal(newexpr, *new_diag_indices3, allow_trivial_diags=True) else: newexpr2 = newexpr if isinstance(newexpr2, ArrayDiagonal): ``` The problem is that mypy doesn't like the same variable e.g. `new_diag_indices` changing type within the function. The code has `new_diag_indices` change from a list to a dict etc. Probably better variable names can be used to describe what these different objects are but mypy just wants the names to be different. The other errors are: ``` sympy/tensor/array/expressions/arrayexpr_derivatives.py:26: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:66: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:77: error: "Expr" has no attribute "shape" sympy/tensor/array/expressions/arrayexpr_derivatives.py:82: error: "Expr" has no attribute "shape" ``` You can this e.g. here: https://github.com/sympy/sympy/blob/88ed7abb488da615b007dd2ed5404312caef473c/sympy/tensor/array/expressions/arrayexpr_derivatives.py#L24-L26 The `type: ignore` was just added in #22699 to silence this error but it clearly is an error. An arbitrary `Expr` does not have a `.shape` attribute. That definitely looks like a legitimate bug. By the way, it looks like singledispatch.register can read type annotations as of Python 3.7 https://docs.python.org/3/library/functools.html. 
So the above could instead be ```py @array_derive.register def _(expr: Expr, x: Expr): return ZeroArray(*x.shape) ``` This would not only be more concise, but it would mean that mypy actually check the same thing as the decorator (i.e., if the register type was wrong but the annotation was right, I'm not sure if mypy would have caught this).
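The annotation-based `register()` mentioned in the comment above is a standard-library `functools.singledispatch` feature available since Python 3.7. A minimal standalone sketch — using a hypothetical `describe` function rather than SymPy's `array_derive` — would be:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    return "unknown"

# register() infers the dispatch type from the first parameter's annotation
# instead of taking the type as an explicit argument.
@describe.register
def _(obj: int):
    return "an int"

@describe.register
def _(obj: str):
    return "a string"

print(describe(3))     # an int
print(describe("hi"))  # a string
print(describe(3.0))   # unknown
```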
[ { "body": "The code quality test seems to be currently failing for all pull requests. This does not seem to have anything to do with the pull requests themselves. In particular, the `mypy` run is failing with the following message:\r\n```\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:288: error: No overload variant of \"__getitem__\" of \"tuple\" matches argument type \"List[Any]\"\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: Possible overload variants:\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: def __getitem__(self, int) -> Basic\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:288: note: def __getitem__(self, slice) -> Tuple[Basic, ...]\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:290: error: No overload variant of \"__getitem__\" of \"tuple\" matches argument type \"List[Any]\"\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: Possible overload variants:\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: def __getitem__(self, int) -> Basic\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:290: note: def __getitem__(self, slice) -> Tuple[Basic, ...]\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:494: error: Incompatible types in assignment (expression has type \"List[Tuple[Any, ...]]\", variable has type \"Dict[Any, Tuple[Any, ...]]\")\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:500: error: Incompatible types in assignment (expression has type \"List[Any]\", variable has type \"Dict[Any, Tuple[Any, ...]]\")\r\nsympy/tensor/array/expressions/arrayexpr_derivatives.py:26: error: \"Expr\" has no attribute \"shape\"\r\nsympy/tensor/array/expressions/arrayexpr_derivatives.py:66: error: \"Expr\" has no attribute \"shape\"\r\nsympy/tensor/array/expressions/arrayexpr_derivatives.py:77: error: \"Expr\" has no attribute \"shape\"\r\nsympy/tensor/array/expressions/arrayexpr_derivatives.py:82: error: \"Expr\" has no attribute \"shape\"\r\nFound 8 errors in 2 files (checked 1455 source files)\r\nError: Process completed with exit code 1.\r\n```\r\nThis should probably be fixed soon as it is also preventing other checks from running.", "number": 22697, "title": "mypy code quality test failing" } ]
d2197554a47702d04f863e368bcac6175d72dfb5
{ "head_commit": "32e56ff22068ff4ddb19ddb96691f13e6129d937", "head_commit_message": "quality: remove most type: ignore from sets", "patch_to_review": "diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py\nindex ec2d9dcfd0c7..263bed6da00c 100644\n--- a/sympy/assumptions/handlers/calculus.py\n+++ b/sympy/assumptions/handlers/calculus.py\n@@ -18,7 +18,7 @@\n # FinitePredicate\n \n \[email protected](Symbol) # type: ignore\[email protected](Symbol)\n def _(expr, assumptions):\n \"\"\"\n Handles Symbol.\n@@ -29,7 +29,7 @@ def _(expr, assumptions):\n return True\n return None\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n \"\"\"\n Return True if expr is bounded, False if not and None if unknown.\n@@ -111,7 +111,7 @@ def _(expr, assumptions):\n result = _bounded\n return result\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n \"\"\"\n Return True if expr is bounded, False if not and None if unknown.\n@@ -166,7 +166,7 @@ def _(expr, assumptions):\n result = False\n return result\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n \"\"\"\n * Unbounded ** NonZero -> Unbounded\n@@ -198,11 +198,11 @@ def _(expr, assumptions):\n return False\n return None\n \[email protected](exp) # type: ignore\[email protected](exp)\n def _(expr, assumptions):\n return ask(Q.finite(expr.exp), assumptions)\n \[email protected](log) # type: ignore\[email protected](log)\n def _(expr, assumptions):\n # After complex -> finite fact is registered to new assumption system,\n # querying Q.infinite may be removed.\n@@ -210,16 +210,16 @@ def _(expr, assumptions):\n return False\n return ask(~Q.zero(expr.args[0]), assumptions)\n \[email protected]_many(cos, sin, Number, Pi, Exp1, GoldenRatio, # type: ignore\[email protected]_many(cos, sin, Number, Pi, Exp1, GoldenRatio,\n TribonacciConstant, ImaginaryUnit, sign)\n def _(expr, assumptions):\n return True\n \[email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) # type: ignore\[email protected]_many(ComplexInfinity, Infinity, NegativeInfinity)\n def _(expr, assumptions):\n return False\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return None\n \n@@ -227,7 +227,7 @@ def _(expr, assumptions):\n # InfinitePredicate\n \n \[email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) # type: ignore\[email protected]_many(ComplexInfinity, Infinity, NegativeInfinity)\n def _(expr, assumptions):\n return True\n \n@@ -235,12 +235,12 @@ def _(expr, assumptions):\n # PositiveInfinitePredicate\n \n \[email protected](Infinity) # type: ignore\[email protected](Infinity)\n def _(expr, assumptions):\n return True\n \n \[email protected]_many(NegativeInfinity, ComplexInfinity) # type: ignore\[email protected]_many(NegativeInfinity, ComplexInfinity)\n def _(expr, assumptions):\n return False\n \n@@ -248,11 +248,11 @@ def _(expr, assumptions):\n # NegativeInfinitePredicate\n \n \[email protected](NegativeInfinity) # type: ignore\[email protected](NegativeInfinity)\n def _(expr, assumptions):\n return True\n \n \[email protected]_many(Infinity, ComplexInfinity) # type: ignore\[email protected]_many(Infinity, ComplexInfinity)\n def _(expr, assumptions):\n return False\ndiff --git a/sympy/assumptions/handlers/common.py b/sympy/assumptions/handlers/common.py\nindex 303e6e3a2596..8d2ef9859335 100644\n--- a/sympy/assumptions/handlers/common.py\n+++ 
b/sympy/assumptions/handlers/common.py\n@@ -47,7 +47,7 @@ def AlwaysNone(expr, assumptions):\n \n # CommutativePredicate\n \[email protected](Symbol) # type: ignore\[email protected](Symbol)\n def _(expr, assumptions):\n \"\"\"Objects are expected to be commutative unless otherwise stated\"\"\"\n assumps = conjuncts(assumptions)\n@@ -59,41 +59,41 @@ def _(expr, assumptions):\n return False\n return True\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n for arg in expr.args:\n if not ask(Q.commutative(arg), assumptions):\n return False\n return True\n \[email protected](Number) # type: ignore\[email protected](Number)\n def _(expr, assumptions):\n return True\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return True\n \n \n # IsTruePredicate\n \[email protected](bool) # type: ignore\[email protected](bool)\n def _(expr, assumptions):\n return expr\n \[email protected](BooleanTrue) # type: ignore\[email protected](BooleanTrue)\n def _(expr, assumptions):\n return True\n \[email protected](BooleanFalse) # type: ignore\[email protected](BooleanFalse)\n def _(expr, assumptions):\n return False\n \[email protected](AppliedPredicate) # type: ignore\[email protected](AppliedPredicate)\n def _(expr, assumptions):\n return ask(expr, assumptions)\n \[email protected](Not) # type: ignore\[email protected](Not)\n def _(expr, assumptions):\n arg = expr.args[0]\n if arg.is_Symbol:\n@@ -105,7 +105,7 @@ def _(expr, assumptions):\n else:\n return None\n \[email protected](Or) # type: ignore\[email protected](Or)\n def _(expr, assumptions):\n result = False\n for arg in expr.args:\n@@ -116,7 +116,7 @@ def _(expr, assumptions):\n result = None\n return result\n \[email protected](And) # type: ignore\[email protected](And)\n def _(expr, assumptions):\n result = True\n for arg in expr.args:\n@@ -127,12 +127,12 @@ def _(expr, assumptions):\n result = None\n return result\n \[email protected](Implies) # type: ignore\[email protected](Implies)\n def _(expr, assumptions):\n p, q = expr.args\n return ask(~p | q, assumptions=assumptions)\n \[email protected](Equivalent) # type: ignore\[email protected](Equivalent)\n def _(expr, assumptions):\n p, q = expr.args\n pt = ask(p, assumptions=assumptions)\ndiff --git a/sympy/assumptions/handlers/matrices.py b/sympy/assumptions/handlers/matrices.py\nindex a220b4363dd3..73debd00c8bc 100644\n--- a/sympy/assumptions/handlers/matrices.py\n+++ b/sympy/assumptions/handlers/matrices.py\n@@ -31,14 +31,14 @@ def _Factorization(predicate, expr, assumptions):\n \n # SquarePredicate\n \[email protected](MatrixExpr) # type: ignore\[email protected](MatrixExpr)\n def _(expr, assumptions):\n return expr.shape[0] == expr.shape[1]\n \n \n # SymmetricPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, mmul = expr.as_coeff_mmul()\n if all(ask(Q.symmetric(arg), assumptions) for arg in mmul.args):\n@@ -52,7 +52,7 @@ def _(expr, assumptions):\n return True\n return ask(Q.symmetric(MatMul(*mmul.args[1:-1])), assumptions)\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -65,11 +65,11 @@ def _(expr, assumptions):\n return ask(Q.symmetric(base), assumptions)\n return None\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n return all(ask(Q.symmetric(arg), assumptions) for arg in expr.args)\n 
\[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if not expr.is_square:\n return False\n@@ -80,15 +80,15 @@ def _(expr, assumptions):\n if Q.symmetric(expr) in conjuncts(assumptions):\n return True\n \[email protected]_many(OneMatrix, ZeroMatrix) # type: ignore\[email protected]_many(OneMatrix, ZeroMatrix)\n def _(expr, assumptions):\n return ask(Q.square(expr), assumptions)\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.symmetric(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n # TODO: implement sathandlers system for the matrices.\n # Now it duplicates the general fact: Implies(Q.diagonal, Q.symmetric).\n@@ -99,14 +99,14 @@ def _(expr, assumptions):\n else:\n return ask(Q.symmetric(expr.parent), assumptions)\n \[email protected](Identity) # type: ignore\[email protected](Identity)\n def _(expr, assumptions):\n return True\n \n \n # InvertiblePredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, mmul = expr.as_coeff_mmul()\n if all(ask(Q.invertible(arg), assumptions) for arg in mmul.args):\n@@ -115,7 +115,7 @@ def _(expr, assumptions):\n for arg in mmul.args):\n return False\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -126,53 +126,53 @@ def _(expr, assumptions):\n return ask(Q.invertible(base), assumptions)\n return None\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n return None\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if not expr.is_square:\n return False\n if Q.invertible(expr) in conjuncts(assumptions):\n return True\n \[email protected]_many(Identity, Inverse) # type: ignore\[email protected]_many(Identity, Inverse)\n def _(expr, assumptions):\n return True\n \[email protected](ZeroMatrix) # type: ignore\[email protected](ZeroMatrix)\n def _(expr, assumptions):\n return False\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected](Transpose) # type: ignore\[email protected](Transpose)\n def _(expr, assumptions):\n return ask(Q.invertible(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n else:\n return ask(Q.invertible(expr.parent), assumptions)\n \[email protected](MatrixBase) # type: ignore\[email protected](MatrixBase)\n def _(expr, assumptions):\n if not expr.is_square:\n return False\n return expr.rank() == expr.rows\n \[email protected](MatrixExpr) # type: ignore\[email protected](MatrixExpr)\n def _(expr, assumptions):\n if not expr.is_square:\n return False\n return None\n \[email protected](BlockMatrix) # type: ignore\[email protected](BlockMatrix)\n def _(expr, assumptions):\n from sympy.matrices.expressions.blockmatrix import reblock_2x2\n if not expr.is_square:\n@@ -200,7 +200,7 @@ def _(expr, assumptions):\n return invertible\n return None\n \[email protected](BlockDiagMatrix) # type: ignore\[email protected](BlockDiagMatrix)\n def _(expr, assumptions):\n if expr.rowblocksizes != expr.colblocksizes:\n 
return None\n@@ -209,7 +209,7 @@ def _(expr, assumptions):\n \n # OrthogonalPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, mmul = expr.as_coeff_mmul()\n if (all(ask(Q.orthogonal(arg), assumptions) for arg in mmul.args) and\n@@ -219,7 +219,7 @@ def _(expr, assumptions):\n for arg in mmul.args):\n return False\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -228,13 +228,13 @@ def _(expr, assumptions):\n return ask(Q.orthogonal(base), assumptions)\n return None\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n if (len(expr.args) == 1 and\n ask(Q.orthogonal(expr.args[0]), assumptions)):\n return True\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if (not expr.is_square or\n ask(Q.invertible(expr), assumptions) is False):\n@@ -242,33 +242,33 @@ def _(expr, assumptions):\n if Q.orthogonal(expr) in conjuncts(assumptions):\n return True\n \[email protected](Identity) # type: ignore\[email protected](Identity)\n def _(expr, assumptions):\n return True\n \[email protected](ZeroMatrix) # type: ignore\[email protected](ZeroMatrix)\n def _(expr, assumptions):\n return False\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.orthogonal(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n else:\n return ask(Q.orthogonal(expr.parent), assumptions)\n \[email protected](Factorization) # type: ignore\[email protected](Factorization)\n def _(expr, assumptions):\n return _Factorization(Q.orthogonal, expr, assumptions)\n \n \n # UnitaryPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, mmul = expr.as_coeff_mmul()\n if (all(ask(Q.unitary(arg), assumptions) for arg in mmul.args) and\n@@ -278,7 +278,7 @@ def _(expr, assumptions):\n for arg in mmul.args):\n return False\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -287,7 +287,7 @@ def _(expr, assumptions):\n return ask(Q.unitary(base), assumptions)\n return None\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if (not expr.is_square or\n ask(Q.invertible(expr), assumptions) is False):\n@@ -295,38 +295,38 @@ def _(expr, assumptions):\n if Q.unitary(expr) in conjuncts(assumptions):\n return True\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.unitary(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n else:\n return ask(Q.unitary(expr.parent), assumptions)\n \[email protected]_many(DFT, Identity) # type: ignore\[email protected]_many(DFT, Identity)\n def _(expr, assumptions):\n return True\n \[email protected](ZeroMatrix) # type: ignore\[email protected](ZeroMatrix)\n def _(expr, assumptions):\n return False\n \[email protected](Factorization) # type: ignore\[email protected](Factorization)\n def _(expr, assumptions):\n 
return _Factorization(Q.unitary, expr, assumptions)\n \n \n # FullRankPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n if all(ask(Q.fullrank(arg), assumptions) for arg in expr.args):\n return True\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -335,23 +335,23 @@ def _(expr, assumptions):\n return ask(Q.fullrank(base), assumptions)\n return None\n \[email protected](Identity) # type: ignore\[email protected](Identity)\n def _(expr, assumptions):\n return True\n \[email protected](ZeroMatrix) # type: ignore\[email protected](ZeroMatrix)\n def _(expr, assumptions):\n return False\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.fullrank(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if ask(Q.orthogonal(expr.parent), assumptions):\n return True\n@@ -359,7 +359,7 @@ def _(expr, assumptions):\n \n # PositiveDefinitePredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, mmul = expr.as_coeff_mmul()\n if (all(ask(Q.positive_definite(arg), assumptions)\n@@ -371,42 +371,42 @@ def _(expr, assumptions):\n return ask(Q.positive_definite(\n MatMul(*mmul.args[1:-1])), assumptions)\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # a power of a positive definite matrix is positive definite\n if ask(Q.positive_definite(expr.args[0]), assumptions):\n return True\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n if all(ask(Q.positive_definite(arg), assumptions)\n for arg in expr.args):\n return True\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if not expr.is_square:\n return False\n if Q.positive_definite(expr) in conjuncts(assumptions):\n return True\n \[email protected](Identity) # type: ignore\[email protected](Identity)\n def _(expr, assumptions):\n return True\n \[email protected](ZeroMatrix) # type: ignore\[email protected](ZeroMatrix)\n def _(expr, assumptions):\n return False\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.positive_definite(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n@@ -416,18 +416,18 @@ def _(expr, assumptions):\n \n # UpperTriangularPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, matrices = expr.as_coeff_matrices()\n if all(ask(Q.upper_triangular(m), assumptions) for m in matrices):\n return True\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n if all(ask(Q.upper_triangular(arg), assumptions) for arg in expr.args):\n return True\n \[email protected](MatPow) # type: 
ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -440,52 +440,52 @@ def _(expr, assumptions):\n return ask(Q.upper_triangular(base), assumptions)\n return None\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if Q.upper_triangular(expr) in conjuncts(assumptions):\n return True\n \[email protected]_many(Identity, ZeroMatrix) # type: ignore\[email protected]_many(Identity, ZeroMatrix)\n def _(expr, assumptions):\n return True\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected](Transpose) # type: ignore\[email protected](Transpose)\n def _(expr, assumptions):\n return ask(Q.lower_triangular(expr.arg), assumptions)\n \[email protected](Inverse) # type: ignore\[email protected](Inverse)\n def _(expr, assumptions):\n return ask(Q.upper_triangular(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n else:\n return ask(Q.upper_triangular(expr.parent), assumptions)\n \[email protected](Factorization) # type: ignore\[email protected](Factorization)\n def _(expr, assumptions):\n return _Factorization(Q.upper_triangular, expr, assumptions)\n \n # LowerTriangularPredicate\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n factor, matrices = expr.as_coeff_matrices()\n if all(ask(Q.lower_triangular(m), assumptions) for m in matrices):\n return True\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n if all(ask(Q.lower_triangular(arg), assumptions) for arg in expr.args):\n return True\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -498,35 +498,35 @@ def _(expr, assumptions):\n return ask(Q.lower_triangular(base), assumptions)\n return None\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if Q.lower_triangular(expr) in conjuncts(assumptions):\n return True\n \[email protected]_many(Identity, ZeroMatrix) # type: ignore\[email protected]_many(Identity, ZeroMatrix)\n def _(expr, assumptions):\n return True\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected](Transpose) # type: ignore\[email protected](Transpose)\n def _(expr, assumptions):\n return ask(Q.upper_triangular(expr.arg), assumptions)\n \[email protected](Inverse) # type: ignore\[email protected](Inverse)\n def _(expr, assumptions):\n return ask(Q.lower_triangular(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if not expr.on_diag:\n return None\n else:\n return ask(Q.lower_triangular(expr.parent), assumptions)\n \[email protected](Factorization) # type: ignore\[email protected](Factorization)\n def _(expr, assumptions):\n return _Factorization(Q.lower_triangular, expr, assumptions)\n \n@@ -536,7 +536,7 @@ def _(expr, assumptions):\n def _is_empty_or_1x1(expr):\n return expr.shape in ((0, 0), (1, 1))\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n if 
_is_empty_or_1x1(expr):\n return True\n@@ -544,7 +544,7 @@ def _(expr, assumptions):\n if all(ask(Q.diagonal(m), assumptions) for m in matrices):\n return True\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -557,27 +557,27 @@ def _(expr, assumptions):\n return ask(Q.diagonal(base), assumptions)\n return None\n \[email protected](MatAdd) # type: ignore\[email protected](MatAdd)\n def _(expr, assumptions):\n if all(ask(Q.diagonal(arg), assumptions) for arg in expr.args):\n return True\n \[email protected](MatrixSymbol) # type: ignore\[email protected](MatrixSymbol)\n def _(expr, assumptions):\n if _is_empty_or_1x1(expr):\n return True\n if Q.diagonal(expr) in conjuncts(assumptions):\n return True\n \[email protected](OneMatrix) # type: ignore\[email protected](OneMatrix)\n def _(expr, assumptions):\n return expr.shape[0] == 1 and expr.shape[1] == 1\n \[email protected]_many(Inverse, Transpose) # type: ignore\[email protected]_many(Inverse, Transpose)\n def _(expr, assumptions):\n return ask(Q.diagonal(expr.arg), assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n if _is_empty_or_1x1(expr):\n return True\n@@ -586,11 +586,11 @@ def _(expr, assumptions):\n else:\n return ask(Q.diagonal(expr.parent), assumptions)\n \[email protected]_many(DiagonalMatrix, DiagMatrix, Identity, ZeroMatrix) # type: ignore\[email protected]_many(DiagonalMatrix, DiagMatrix, Identity, ZeroMatrix)\n def _(expr, assumptions):\n return True\n \[email protected](Factorization) # type: ignore\[email protected](Factorization)\n def _(expr, assumptions):\n return _Factorization(Q.diagonal, expr, assumptions)\n \n@@ -613,12 +613,12 @@ def MatMul_elements(matrix_predicate, scalar_predicate, expr, assumptions):\n test_closed_group(Basic(*matrices), assumptions, matrix_predicate)])\n \n \[email protected]_many(Determinant, HadamardProduct, MatAdd, # type: ignore\[email protected]_many(Determinant, HadamardProduct, MatAdd,\n Trace, Transpose)\n def _(expr, assumptions):\n return test_closed_group(expr, assumptions, Q.integer_elements)\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -629,31 +629,31 @@ def _(expr, assumptions):\n return ask(Q.integer_elements(base), assumptions)\n return None\n \[email protected]_many(Identity, OneMatrix, ZeroMatrix) # type: ignore\[email protected]_many(Identity, OneMatrix, ZeroMatrix)\n def _(expr, assumptions):\n return True\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n return MatMul_elements(Q.integer_elements, Q.integer, expr, assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n return MS_elements(Q.integer_elements, expr, assumptions)\n \[email protected](BlockMatrix) # type: ignore\[email protected](BlockMatrix)\n def _(expr, assumptions):\n return BM_elements(Q.integer_elements, expr, assumptions)\n \n \n # RealElementsPredicate\n \[email protected]_many(Determinant, Factorization, HadamardProduct, # type: ignore\[email protected]_many(Determinant, Factorization, HadamardProduct,\n MatAdd, Trace, Transpose)\n def _(expr, assumptions):\n return test_closed_group(expr, assumptions, Q.real_elements)\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, 
assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -666,27 +666,27 @@ def _(expr, assumptions):\n return ask(Q.real_elements(base), assumptions)\n return None\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n return MatMul_elements(Q.real_elements, Q.real, expr, assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n return MS_elements(Q.real_elements, expr, assumptions)\n \[email protected](BlockMatrix) # type: ignore\[email protected](BlockMatrix)\n def _(expr, assumptions):\n return BM_elements(Q.real_elements, expr, assumptions)\n \n \n # ComplexElementsPredicate\n \[email protected]_many(Determinant, Factorization, HadamardProduct, # type: ignore\[email protected]_many(Determinant, Factorization, HadamardProduct,\n Inverse, MatAdd, Trace, Transpose)\n def _(expr, assumptions):\n return test_closed_group(expr, assumptions, Q.complex_elements)\n \[email protected](MatPow) # type: ignore\[email protected](MatPow)\n def _(expr, assumptions):\n # only for integer powers\n base, exp = expr.args\n@@ -699,18 +699,18 @@ def _(expr, assumptions):\n return ask(Q.complex_elements(base), assumptions)\n return None\n \[email protected](MatMul) # type: ignore\[email protected](MatMul)\n def _(expr, assumptions):\n return MatMul_elements(Q.complex_elements, Q.complex, expr, assumptions)\n \[email protected](MatrixSlice) # type: ignore\[email protected](MatrixSlice)\n def _(expr, assumptions):\n return MS_elements(Q.complex_elements, expr, assumptions)\n \[email protected](BlockMatrix) # type: ignore\[email protected](BlockMatrix)\n def _(expr, assumptions):\n return BM_elements(Q.complex_elements, expr, assumptions)\n \[email protected](DFT) # type: ignore\[email protected](DFT)\n def _(expr, assumptions):\n return True\ndiff --git a/sympy/assumptions/handlers/ntheory.py b/sympy/assumptions/handlers/ntheory.py\nindex 48b5b09b45f8..4f1397b283ee 100644\n--- a/sympy/assumptions/handlers/ntheory.py\n+++ b/sympy/assumptions/handlers/ntheory.py\n@@ -31,19 +31,19 @@ def _PrimePredicate_number(expr, assumptions):\n # when not exact, we won't give a True or False\n # since the number represents an approximate value\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_prime\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n return _PrimePredicate_number(expr, assumptions)\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n if expr.is_number:\n return _PrimePredicate_number(expr, assumptions)\n@@ -54,7 +54,7 @@ def _(expr, assumptions):\n if arg.is_number and arg.is_composite:\n return False\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n \"\"\"\n Integer**Integer -> !Prime\n@@ -65,37 +65,37 @@ def _(expr, assumptions):\n ask(Q.integer(expr.base), assumptions):\n return False\n \[email protected](Integer) # type: ignore\[email protected](Integer)\n def _(expr, assumptions):\n return isprime(expr)\n \[email protected]_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) # type: ignore\[email protected]_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit)\n def _(expr, assumptions):\n return False\n \[email protected](Float) # type: ignore\[email protected](Float)\n def _(expr, assumptions):\n return 
_PrimePredicate_number(expr, assumptions)\n \[email protected](NumberSymbol) # type: ignore\[email protected](NumberSymbol)\n def _(expr, assumptions):\n return _PrimePredicate_number(expr, assumptions)\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return None\n \n \n # CompositePredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_composite\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n _positive = ask(Q.positive(expr), assumptions)\n if _positive:\n@@ -129,19 +129,19 @@ def _EvenPredicate_number(expr, assumptions):\n return False\n return i % 2 == 0\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_even\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n return _EvenPredicate_number(expr, assumptions)\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n \"\"\"\n Even * Integer -> Even\n@@ -182,7 +182,7 @@ def _(expr, assumptions):\n if odd == len(expr.args):\n return False\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n \"\"\"\n Even + Odd -> Odd\n@@ -203,7 +203,7 @@ def _(expr, assumptions):\n else:\n return _result\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n if expr.is_number:\n return _EvenPredicate_number(expr, assumptions)\n@@ -215,48 +215,48 @@ def _(expr, assumptions):\n elif expr.base is S.NegativeOne:\n return False\n \[email protected](Integer) # type: ignore\[email protected](Integer)\n def _(expr, assumptions):\n return not bool(expr.p & 1)\n \[email protected]_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) # type: ignore\[email protected]_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit)\n def _(expr, assumptions):\n return False\n \[email protected](NumberSymbol) # type: ignore\[email protected](NumberSymbol)\n def _(expr, assumptions):\n return _EvenPredicate_number(expr, assumptions)\n \[email protected](Abs) # type: ignore\[email protected](Abs)\n def _(expr, assumptions):\n if ask(Q.real(expr.args[0]), assumptions):\n return ask(Q.even(expr.args[0]), assumptions)\n \[email protected](re) # type: ignore\[email protected](re)\n def _(expr, assumptions):\n if ask(Q.real(expr.args[0]), assumptions):\n return ask(Q.even(expr.args[0]), assumptions)\n \[email protected](im) # type: ignore\[email protected](im)\n def _(expr, assumptions):\n if ask(Q.real(expr.args[0]), assumptions):\n return True\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return None\n \n \n # OddPredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_odd\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n _integer = ask(Q.integer(expr), assumptions)\n if _integer:\ndiff --git a/sympy/assumptions/handlers/order.py b/sympy/assumptions/handlers/order.py\nindex 30d025118d2e..f4a5378c20a9 100644\n--- a/sympy/assumptions/handlers/order.py\n+++ b/sympy/assumptions/handlers/order.py\n@@ -40,19 +40,19 @@ def _NegativePredicate_number(expr, 
assumptions):\n if r._prec != 1:\n return r < 0\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n return _NegativePredicate_number(expr, assumptions)\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_negative\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n \"\"\"\n Positive + Positive -> Positive,\n@@ -76,7 +76,7 @@ def _(expr, assumptions):\n if nonpos < len(expr.args):\n return True\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n if expr.is_number:\n return _NegativePredicate_number(expr, assumptions)\n@@ -92,7 +92,7 @@ def _(expr, assumptions):\n return\n return result\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n \"\"\"\n Real ** Even -> NonNegative\n@@ -116,11 +116,11 @@ def _(expr, assumptions):\n if ask(Q.odd(expr.exp), assumptions):\n return ask(Q.negative(expr.base), assumptions)\n \[email protected]_many(Abs, ImaginaryUnit) # type: ignore\[email protected]_many(Abs, ImaginaryUnit)\n def _(expr, assumptions):\n return False\n \[email protected](exp) # type: ignore\[email protected](exp)\n def _(expr, assumptions):\n if ask(Q.real(expr.exp), assumptions):\n return False\n@@ -129,7 +129,7 @@ def _(expr, assumptions):\n \n # NonNegativePredicate\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n notnegative = fuzzy_not(_NegativePredicate_number(expr, assumptions))\n@@ -138,7 +138,7 @@ def _(expr, assumptions):\n else:\n return notnegative\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_nonnegative\n if ret is None:\n@@ -148,14 +148,14 @@ def _(expr, assumptions):\n \n # NonZeroPredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_nonzero\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if ask(Q.real(expr)) is False:\n return False\n@@ -167,13 +167,13 @@ def nonz(i):\n return i != 0\n return fuzzy_or(nonz(i) for i in i.as_real_imag())\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n if all(ask(Q.positive(x), assumptions) for x in expr.args) \\\n or all(ask(Q.negative(x), assumptions) for x in expr.args):\n return True\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n for arg in expr.args:\n result = ask(Q.nonzero(arg), assumptions)\n@@ -182,34 +182,34 @@ def _(expr, assumptions):\n return result\n return True\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n return ask(Q.nonzero(expr.base), assumptions)\n \[email protected](Abs) # type: ignore\[email protected](Abs)\n def _(expr, assumptions):\n return ask(Q.nonzero(expr.args[0]), assumptions)\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return None\n \n \n # ZeroPredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_zero\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, 
assumptions):\n return fuzzy_and([fuzzy_not(ask(Q.nonzero(expr), assumptions)),\n ask(Q.real(expr), assumptions)])\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n # TODO: This should be deducible from the nonzero handler\n return fuzzy_or(ask(Q.zero(arg), assumptions) for arg in expr.args)\n@@ -217,14 +217,14 @@ def _(expr, assumptions):\n \n # NonPositivePredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_nonpositive\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n notpositive = fuzzy_not(_PositivePredicate_number(expr, assumptions))\n@@ -255,19 +255,19 @@ def _PositivePredicate_number(expr, assumptions):\n if r._prec != 1:\n return r > 0\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_positive\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n if expr.is_number:\n return _PositivePredicate_number(expr, assumptions)\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n if expr.is_number:\n return _PositivePredicate_number(expr, assumptions)\n@@ -281,7 +281,7 @@ def _(expr, assumptions):\n return\n return result\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n if expr.is_number:\n return _PositivePredicate_number(expr, assumptions)\n@@ -301,7 +301,7 @@ def _(expr, assumptions):\n if nonneg < len(expr.args):\n return True\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n if expr.base == E:\n if ask(Q.real(expr.exp), assumptions):\n@@ -321,14 +321,14 @@ def _(expr, assumptions):\n if ask(Q.odd(expr.exp), assumptions):\n return False\n \[email protected](exp) # type: ignore\[email protected](exp)\n def _(expr, assumptions):\n if ask(Q.real(expr.exp), assumptions):\n return True\n if ask(Q.imaginary(expr.exp), assumptions):\n return ask(Q.even(expr.exp/(I*pi)), assumptions)\n \[email protected](log) # type: ignore\[email protected](log)\n def _(expr, assumptions):\n r = ask(Q.real(expr.args[0]), assumptions)\n if r is not True:\n@@ -338,41 +338,41 @@ def _(expr, assumptions):\n if ask(Q.negative(expr.args[0] - 1), assumptions):\n return False\n \[email protected](factorial) # type: ignore\[email protected](factorial)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.integer(x) & Q.positive(x), assumptions):\n return True\n \[email protected](ImaginaryUnit) # type: ignore\[email protected](ImaginaryUnit)\n def _(expr, assumptions):\n return False\n \[email protected](Abs) # type: ignore\[email protected](Abs)\n def _(expr, assumptions):\n return ask(Q.nonzero(expr), assumptions)\n \[email protected](Trace) # type: ignore\[email protected](Trace)\n def _(expr, assumptions):\n if ask(Q.positive_definite(expr.arg), assumptions):\n return True\n \[email protected](Determinant) # type: ignore\[email protected](Determinant)\n def _(expr, assumptions):\n if ask(Q.positive_definite(expr.arg), assumptions):\n return True\n \[email protected](MatrixElement) # type: ignore\[email protected](MatrixElement)\n def _(expr, assumptions):\n if (expr.i == expr.j\n and ask(Q.positive_definite(expr.parent), assumptions)):\n return True\n \[email protected](atan) # type: 
ignore\[email protected](atan)\n def _(expr, assumptions):\n return ask(Q.positive(expr.args[0]), assumptions)\n \[email protected](asin) # type: ignore\[email protected](asin)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.positive(x) & Q.nonpositive(x - 1), assumptions):\n@@ -380,38 +380,38 @@ def _(expr, assumptions):\n if ask(Q.negative(x) & Q.nonnegative(x + 1), assumptions):\n return False\n \[email protected](acos) # type: ignore\[email protected](acos)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.nonpositive(x - 1) & Q.nonnegative(x + 1), assumptions):\n return True\n \[email protected](acot) # type: ignore\[email protected](acot)\n def _(expr, assumptions):\n return ask(Q.real(expr.args[0]), assumptions)\n \[email protected](NaN) # type: ignore\[email protected](NaN)\n def _(expr, assumptions):\n return None\n \n \n # ExtendedNegativePredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(Q.negative(expr) | Q.negative_infinite(expr), assumptions)\n \n \n # ExtendedPositivePredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(Q.positive(expr) | Q.positive_infinite(expr), assumptions)\n \n \n # ExtendedNonZeroPredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(\n Q.negative_infinite(expr) | Q.negative(expr) | Q.positive(expr) | Q.positive_infinite(expr),\n@@ -420,7 +420,7 @@ def _(expr, assumptions):\n \n # ExtendedNonPositivePredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(\n Q.negative_infinite(expr) | Q.negative(expr) | Q.zero(expr),\n@@ -429,7 +429,7 @@ def _(expr, assumptions):\n \n # ExtendedNonNegativePredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(\n Q.zero(expr) | Q.positive(expr) | Q.positive_infinite(expr),\ndiff --git a/sympy/assumptions/handlers/sets.py b/sympy/assumptions/handlers/sets.py\nindex 377ce28eae07..b53bcfedef30 100644\n--- a/sympy/assumptions/handlers/sets.py\n+++ b/sympy/assumptions/handlers/sets.py\n@@ -41,19 +41,19 @@ def _IntegerPredicate_number(expr, assumptions):\n def _(expr, assumptions):\n return True\n \[email protected]_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity, # type: ignore\[email protected]_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity,\n NegativeInfinity, Pi, Rational, TribonacciConstant)\n def _(expr, assumptions):\n return False\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_integer\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected]_many(Add, Pow) # type: ignore\[email protected]_many(Add, Pow)\n def _(expr, assumptions):\n \"\"\"\n * Integer + Integer -> Integer\n@@ -64,7 +64,7 @@ def _(expr, assumptions):\n return _IntegerPredicate_number(expr, assumptions)\n return test_closed_group(expr, assumptions, Q.integer)\n \[email protected](Mul) # type: ignore\[email protected](Mul)\n def _(expr, assumptions):\n \"\"\"\n * Integer*Integer -> Integer\n@@ -92,38 +92,38 @@ def _(expr, assumptions):\n \n return _output\n \[email protected](Abs) # type: ignore\[email protected](Abs)\n def _(expr, assumptions):\n return ask(Q.integer(expr.args[0]), assumptions)\n \[email protected]_many(Determinant, MatrixElement, Trace) # type: ignore\[email protected]_many(Determinant, MatrixElement, 
Trace)\n def _(expr, assumptions):\n return ask(Q.integer_elements(expr.args[0]), assumptions)\n \n \n # RationalPredicate\n \[email protected](Rational) # type: ignore\[email protected](Rational)\n def _(expr, assumptions):\n return True\n \[email protected](Float) # type: ignore\[email protected](Float)\n def _(expr, assumptions):\n return None\n \[email protected]_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity, # type: ignore\[email protected]_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity,\n NegativeInfinity, Pi, TribonacciConstant)\n def _(expr, assumptions):\n return False\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_rational\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected]_many(Add, Mul) # type: ignore\[email protected]_many(Add, Mul)\n def _(expr, assumptions):\n \"\"\"\n * Rational + Rational -> Rational\n@@ -135,7 +135,7 @@ def _(expr, assumptions):\n return False\n return test_closed_group(expr, assumptions, Q.rational)\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n \"\"\"\n * Rational ** Integer -> Rational\n@@ -154,25 +154,25 @@ def _(expr, assumptions):\n if ask(Q.prime(expr.base), assumptions):\n return False\n \[email protected]_many(asin, atan, cos, sin, tan) # type: ignore\[email protected]_many(asin, atan, cos, sin, tan)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.rational(x), assumptions):\n return ask(~Q.nonzero(x), assumptions)\n \[email protected](exp) # type: ignore\[email protected](exp)\n def _(expr, assumptions):\n x = expr.exp\n if ask(Q.rational(x), assumptions):\n return ask(~Q.nonzero(x), assumptions)\n \[email protected]_many(acot, cot) # type: ignore\[email protected]_many(acot, cot)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.rational(x), assumptions):\n return False\n \[email protected]_many(acos, log) # type: ignore\[email protected]_many(acos, log)\n def _(expr, assumptions):\n x = expr.args[0]\n if ask(Q.rational(x), assumptions):\n@@ -181,14 +181,14 @@ def _(expr, assumptions):\n \n # IrrationalPredicate\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_irrational\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Basic) # type: ignore\[email protected](Basic)\n def _(expr, assumptions):\n _real = ask(Q.real(expr), assumptions)\n if _real:\n@@ -211,23 +211,23 @@ def _RealPredicate_number(expr, assumptions):\n # allow None to be returned if we couldn't show for sure\n # that i was 0\n \[email protected]_many(Abs, Exp1, Float, GoldenRatio, im, Pi, Rational, # type: ignore\[email protected]_many(Abs, Exp1, Float, GoldenRatio, im, Pi, Rational,\n re, TribonacciConstant)\n def _(expr, assumptions):\n return True\n \[email protected]_many(ImaginaryUnit, Infinity, NegativeInfinity) # type: ignore\[email protected]_many(ImaginaryUnit, Infinity, NegativeInfinity)\n def _(expr, assumptions):\n return False\n \[email protected](Expr) # type: ignore\[email protected](Expr)\n def _(expr, assumptions):\n ret = expr.is_real\n if ret is None:\n raise MDNotImplementedError\n return ret\n \[email protected](Add) # type: ignore\[email protected](Add)\n def _(expr, assumptions):\n \"\"\"\n * Real + Real -> Real\n@@ -237,7 +237,7 @@ def _(expr, assumptions):\n return _RealPredicate_number(expr, assumptions)\n return test_closed_group(expr, assumptions, Q.real)\n \[email protected](Mul) # type: ignore\[email 
protected](Mul)\n def _(expr, assumptions):\n \"\"\"\n * Real*Real -> Real\n@@ -257,7 +257,7 @@ def _(expr, assumptions):\n else:\n return result\n \[email protected](Pow) # type: ignore\[email protected](Pow)\n def _(expr, assumptions):\n \"\"\"\n * Real**Integer -> Real\n@@ -321,29 +321,29 @@ def _(expr, assumptions):\n elif ask(Q.negative(expr.base), assumptions):\n return False\n \[email protected]_many(cos, sin) # type: ignore\[email protected]_many(cos, sin)\n def _(expr, assumptions):\n if ask(Q.real(expr.args[0]), assumptions):\n return True\n \[email protected](exp) # type: ignore\[email protected](exp)\n def _(expr, assumptions):\n return ask(\n Q.integer(expr.exp/I/pi) | Q.real(expr.exp), assumptions\n )\n \[email protected](log) # type: ignore\[email protected](log)\n def _(expr, assumptions):\n return ask(Q.positive(expr.args[0]), assumptions)\n \[email protected]_many(Determinant, MatrixElement, Trace) # type: ignore\[email protected]_many(Determinant, MatrixElement, Trace)\n def _(expr, assumptions):\n return ask(Q.real_elements(expr.args[0]), assumptions)\n \n \n # ExtendedRealPredicate\n \[email protected](object) # type: ignore\[email protected](object)\n def _(expr, assumptions):\n return ask(Q.negative_infinite(expr)\n | Q.negative(expr)\n@@ -352,7 +352,7 @@ def _(expr, assumptions):\n | Q.positive_infinite(expr),\n assumptions)\n \[email protected]_many(Infinity, NegativeInfinity) # type: ignore\[email protected]_many(Infinity, NegativeInfinity)\n def _(expr, assumptions):\n return True\n \ndiff --git a/sympy/assumptions/sathandlers.py b/sympy/assumptions/sathandlers.py\nindex b96154b60db2..48579a87274e 100644\n--- a/sympy/assumptions/sathandlers.py\n+++ b/sympy/assumptions/sathandlers.py\n@@ -200,7 +200,7 @@ def __call__(self, expr):\n \n ## Abs ##\n \n-@class_fact_registry.multiregister(Abs) # type: ignore\n+@class_fact_registry.multiregister(Abs)\n def _(expr):\n arg = expr.args[0]\n return [Q.nonnegative(expr),\n@@ -213,7 +213,7 @@ def _(expr):\n \n ### Add ##\n \n-@class_fact_registry.multiregister(Add) # type: ignore\n+@class_fact_registry.multiregister(Add)\n def _(expr):\n return [allargs(x, Q.positive(x), expr) >> Q.positive(expr),\n allargs(x, Q.negative(x), expr) >> Q.negative(expr),\n@@ -223,7 +223,7 @@ def _(expr):\n exactlyonearg(x, ~Q.integer(x), expr) >> ~Q.integer(expr),\n ]\n \n-@class_fact_registry.register(Add) # type: ignore\n+@class_fact_registry.register(Add)\n def _(expr):\n allargs_real = allargs(x, Q.real(x), expr)\n onearg_irrational = exactlyonearg(x, Q.irrational(x), expr)\n@@ -232,7 +232,7 @@ def _(expr):\n \n ### Mul ###\n \n-@class_fact_registry.multiregister(Mul) # type: ignore\n+@class_fact_registry.multiregister(Mul)\n def _(expr):\n return [Equivalent(Q.zero(expr), anyarg(x, Q.zero(x), expr)),\n allargs(x, Q.positive(x), expr) >> Q.positive(expr),\n@@ -243,7 +243,7 @@ def _(expr):\n allargs(x, Q.commutative(x), expr) >> Q.commutative(expr),\n ]\n \n-@class_fact_registry.register(Mul) # type: ignore\n+@class_fact_registry.register(Mul)\n def _(expr):\n # Implicitly assumes Mul has more than one arg\n # Would be allargs(x, Q.prime(x) | Q.composite(x)) except 1 is composite\n@@ -252,20 +252,20 @@ def _(expr):\n allargs_prime = allargs(x, Q.prime(x), expr)\n return Implies(allargs_prime, ~Q.prime(expr))\n \n-@class_fact_registry.register(Mul) # type: ignore\n+@class_fact_registry.register(Mul)\n def _(expr):\n # General Case: Odd number of imaginary args implies mul is imaginary(To be implemented)\n allargs_imag_or_real = allargs(x, 
Q.imaginary(x) | Q.real(x), expr)\n onearg_imaginary = exactlyonearg(x, Q.imaginary(x), expr)\n return Implies(allargs_imag_or_real, Implies(onearg_imaginary, Q.imaginary(expr)))\n \n-@class_fact_registry.register(Mul) # type: ignore\n+@class_fact_registry.register(Mul)\n def _(expr):\n allargs_real = allargs(x, Q.real(x), expr)\n onearg_irrational = exactlyonearg(x, Q.irrational(x), expr)\n return Implies(allargs_real, Implies(onearg_irrational, Q.irrational(expr)))\n \n-@class_fact_registry.register(Mul) # type: ignore\n+@class_fact_registry.register(Mul)\n def _(expr):\n # Including the integer qualification means we don't need to add any facts\n # for odd, since the assumptions already know that every integer is\n@@ -277,7 +277,7 @@ def _(expr):\n \n ### MatMul ###\n \n-@class_fact_registry.register(MatMul) # type: ignore\n+@class_fact_registry.register(MatMul)\n def _(expr):\n allargs_square = allargs(x, Q.square(x), expr)\n allargs_invertible = allargs(x, Q.invertible(x), expr)\n@@ -286,7 +286,7 @@ def _(expr):\n \n ### Pow ###\n \n-@class_fact_registry.multiregister(Pow) # type: ignore\n+@class_fact_registry.multiregister(Pow)\n def _(expr):\n base, exp = expr.base, expr.exp\n return [\n@@ -312,7 +312,7 @@ def _(expr):\n Q.composite: lambda o: o.is_composite,\n }\n \n-@class_fact_registry.multiregister(Number, NumberSymbol, ImaginaryUnit) # type: ignore\n+@class_fact_registry.multiregister(Number, NumberSymbol, ImaginaryUnit)\n def _(expr):\n ret = []\n for p, getter in _old_assump_getters.items():\ndiff --git a/sympy/sets/handlers/add.py b/sympy/sets/handlers/add.py\nindex a1b6f9bbd8ee..8c07b25ed19d 100644\n--- a/sympy/sets/handlers/add.py\n+++ b/sympy/sets/handlers/add.py\n@@ -1,8 +1,7 @@\n from sympy.core.numbers import oo, Infinity, NegativeInfinity\n from sympy.core.singleton import S\n-from sympy.core.symbol import symbols\n from sympy.core import Basic, Expr\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n from sympy.sets import Interval, FiniteSet\n \n \n@@ -10,21 +9,22 @@\n # XXX: The functions in this module are clearly not tested and are broken in a\n # number of ways.\n \n-_x, _y = symbols(\"x y\")\n+_set_add = Dispatcher('_set_add')\n+_set_sub = Dispatcher('_set_sub')\n \n \n-@dispatch(Basic, Basic) # type: ignore # noqa:F811\n-def _set_add(x, y): # noqa:F811\n+@_set_add.register(Basic, Basic)\n+def _(x, y):\n return None\n \n \n-@dispatch(Expr, Expr) # type: ignore # noqa:F811\n-def _set_add(x, y): # noqa:F811\n+@_set_add.register(Expr, Expr)\n+def _(x, y):\n return x+y\n \n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def _set_add(x, y): # noqa:F811\n+@_set_add.register(Interval, Interval)\n+def _(x, y):\n \"\"\"\n Additions in interval arithmetic\n https://en.wikipedia.org/wiki/Interval_arithmetic\n@@ -33,31 +33,31 @@ def _set_add(x, y): # noqa:F811\n x.left_open or y.left_open, x.right_open or y.right_open)\n \n \n-@dispatch(Interval, Infinity) # type: ignore # noqa:F811\n-def _set_add(x, y): # noqa:F811\n+@_set_add.register(Interval, Infinity)\n+def _(x, y):\n if x.start is S.NegativeInfinity:\n return Interval(-oo, oo)\n return FiniteSet({S.Infinity})\n \n-@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811\n-def _set_add(x, y): # noqa:F811\n+@_set_add.register(Interval, NegativeInfinity)\n+def _(x, y):\n if x.end is S.Infinity:\n return Interval(-oo, oo)\n return FiniteSet({S.NegativeInfinity})\n \n \n-@dispatch(Basic, Basic) # type: ignore\n-def _set_sub(x, y): # 
noqa:F811\n+@_set_sub.register(Basic, Basic)\n+def _(x, y):\n return None\n \n \n-@dispatch(Expr, Expr) # type: ignore # noqa:F811\n-def _set_sub(x, y): # noqa:F811\n+@_set_sub.register(Expr, Expr)\n+def _(x, y):\n return x-y\n \n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def _set_sub(x, y): # noqa:F811\n+@_set_sub.register(Interval, Interval)\n+def _(x, y):\n \"\"\"\n Subtractions in interval arithmetic\n https://en.wikipedia.org/wiki/Interval_arithmetic\n@@ -66,14 +66,14 @@ def _set_sub(x, y): # noqa:F811\n x.left_open or y.right_open, x.right_open or y.left_open)\n \n \n-@dispatch(Interval, Infinity) # type: ignore # noqa:F811\n-def _set_sub(x, y): # noqa:F811\n+@_set_sub.register(Interval, Infinity)\n+def _(x, y):\n if x.start is S.NegativeInfinity:\n return Interval(-oo, oo)\n return FiniteSet(-oo)\n \n-@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811\n-def _set_sub(x, y): # noqa:F811\n+@_set_sub.register(Interval, NegativeInfinity)\n+def _(x, y):\n if x.start is S.NegativeInfinity:\n return Interval(-oo, oo)\n return FiniteSet(-oo)\ndiff --git a/sympy/sets/handlers/functions.py b/sympy/sets/handlers/functions.py\nindex 88df9e2cda6d..2529dbfd4584 100644\n--- a/sympy/sets/handlers/functions.py\n+++ b/sympy/sets/handlers/functions.py\n@@ -8,7 +8,7 @@\n from sympy.functions.elementary.exponential import exp, log\n from sympy.functions.elementary.miscellaneous import Min, Max\n from sympy.logic.boolalg import true\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n from sympy.sets import (imageset, Interval, FiniteSet, Union, ImageSet,\n Intersection, Range, Complement)\n from sympy.sets.sets import EmptySet, is_function_invertible_in_set\n@@ -20,17 +20,19 @@\n \n FunctionUnion = (FunctionClass, Lambda)\n \n+_set_function = Dispatcher('_set_function')\n \n-@dispatch(FunctionClass, Set) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+\n+@_set_function.register(FunctionClass, Set)\n+def _(f, x):\n return None\n \n-@dispatch(FunctionUnion, FiniteSet) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionUnion, FiniteSet)\n+def _(f, x):\n return FiniteSet(*map(f, x))\n \n-@dispatch(Lambda, Interval) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(Lambda, Interval)\n+def _(f, x):\n from sympy.solvers.solveset import solveset\n from sympy.series import limit\n # TODO: handle functions with infinitely many solutions (eg, sin, tan)\n@@ -120,36 +122,36 @@ def _set_function(f, x): # noqa:F811\n for i in range(0, len(sing) - 1)]) + \\\n imageset(f, Interval(sing[-1], x.end, True, x.right_open))\n \n-@dispatch(FunctionClass, Interval) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionClass, Interval)\n+def _(f, x):\n if f == exp:\n return Interval(exp(x.start), exp(x.end), x.left_open, x.right_open)\n elif f == log:\n return Interval(log(x.start), log(x.end), x.left_open, x.right_open)\n return ImageSet(Lambda(_x, f(_x)), x)\n \n-@dispatch(FunctionUnion, Union) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionUnion, Union)\n+def _(f, x):\n return Union(*(imageset(f, arg) for arg in x.args))\n \n-@dispatch(FunctionUnion, Intersection) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionUnion, Intersection)\n+def _(f, x):\n # If the function is invertible, intersect the 
maps of the sets.\n if is_function_invertible_in_set(f, x):\n return Intersection(*(imageset(f, arg) for arg in x.args))\n else:\n return ImageSet(Lambda(_x, f(_x)), x)\n \n-@dispatch(FunctionUnion, EmptySet) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionUnion, EmptySet)\n+def _(f, x):\n return x\n \n-@dispatch(FunctionUnion, Set) # type: ignore # noqa:F811\n-def _set_function(f, x): # noqa:F811\n+@_set_function.register(FunctionUnion, Set)\n+def _(f, x):\n return ImageSet(Lambda(_x, f(_x)), x)\n \n-@dispatch(FunctionUnion, Range) # type: ignore # noqa:F811\n-def _set_function(f, self): # noqa:F811\n+@_set_function.register(FunctionUnion, Range)\n+def _(f, self):\n if not self:\n return S.EmptySet\n if not isinstance(f.expr, Expr):\n@@ -172,8 +174,8 @@ def _set_function(f, self): # noqa:F811\n if F != expr:\n return imageset(x, F, Range(self.size))\n \n-@dispatch(FunctionUnion, Integers) # type: ignore # noqa:F811\n-def _set_function(f, self): # noqa:F811\n+@_set_function.register(FunctionUnion, Integers)\n+def _(f, self):\n expr = f.expr\n if not isinstance(expr, Expr):\n return\n@@ -225,8 +227,8 @@ def _set_function(f, self): # noqa:F811\n return ImageSet(Lambda(n, expr), S.Integers)\n \n \n-@dispatch(FunctionUnion, Naturals) # type: ignore # noqa:F811\n-def _set_function(f, self): # noqa:F811\n+@_set_function.register(FunctionUnion, Naturals)\n+def _(f, self):\n expr = f.expr\n if not isinstance(expr, Expr):\n return\n@@ -252,8 +254,8 @@ def _set_function(f, self): # noqa:F811\n return Range(c, -oo, step)\n \n \n-@dispatch(FunctionUnion, Reals) # type: ignore # noqa:F811\n-def _set_function(f, self): # noqa:F811\n+@_set_function.register(FunctionUnion, Reals)\n+def _(f, self):\n expr = f.expr\n if not isinstance(expr, Expr):\n return\ndiff --git a/sympy/sets/handlers/intersection.py b/sympy/sets/handlers/intersection.py\nindex 1980251c5d97..9305721b3cb7 100644\n--- a/sympy/sets/handlers/intersection.py\n+++ b/sympy/sets/handlers/intersection.py\n@@ -6,7 +6,7 @@\n from sympy.core.symbol import (Dummy, symbols)\n from sympy.sets.fancysets import ComplexRegion\n from sympy.sets.sets import (FiniteSet, Intersection, Interval, Set, Union)\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n from sympy.sets.conditionset import ConditionSet\n from sympy.sets.fancysets import (Integers, Naturals, Reals, Range,\n ImageSet, Rationals)\n@@ -14,28 +14,31 @@\n from sympy.simplify.radsimp import numer\n \n \n-@dispatch(ConditionSet, ConditionSet) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+intersection_sets = Dispatcher('intersection_sets')\n+\n+\n+@intersection_sets.register(ConditionSet, ConditionSet)\n+def _(a, b):\n return None\n \n-@dispatch(ConditionSet, Set) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(ConditionSet, Set)\n+def _(a, b):\n return ConditionSet(a.sym, a.condition, Intersection(a.base_set, b))\n \n-@dispatch(Naturals, Integers) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Naturals, Integers)\n+def _(a, b):\n return a\n \n-@dispatch(Naturals, Naturals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Naturals, Naturals)\n+def _(a, b):\n return a if a is S.Naturals else b\n \n-@dispatch(Interval, Naturals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # 
noqa:F811\n+@intersection_sets.register(Interval, Naturals)\n+def _(a, b):\n return intersection_sets(b, a)\n \n-@dispatch(ComplexRegion, Set) # type: ignore # noqa:F811\n-def intersection_sets(self, other): # noqa:F811\n+@intersection_sets.register(ComplexRegion, Set)\n+def _(self, other):\n if other.is_ComplexRegion:\n # self in rectangular form\n if (not self.polar) and (not other.polar):\n@@ -81,12 +84,12 @@ def intersection_sets(self, other): # noqa:F811\n new_interval = Union(*new_interval)\n return Intersection(new_interval, other)\n \n-@dispatch(Integers, Reals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Integers, Reals)\n+def _(a, b):\n return a\n \n-@dispatch(Range, Interval) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Range, Interval)\n+def _(a, b):\n # Check that there are no symbolic arguments\n if not all(i.is_number for i in a.args + b.args[:2]):\n return\n@@ -106,12 +109,12 @@ def intersection_sets(a, b): # noqa:F811\n end -= 1\n return intersection_sets(a, Range(start, end + 1))\n \n-@dispatch(Range, Naturals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Range, Naturals)\n+def _(a, b):\n return intersection_sets(a, Interval(b.inf, S.Infinity))\n \n-@dispatch(Range, Range) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Range, Range)\n+def _(a, b):\n # Check that there are no symbolic range arguments\n if not all(all(v.is_number for v in r.args) for r in [a, b]):\n return None\n@@ -226,13 +229,13 @@ def _updated_range(r, first):\n return Range(start, stop, step)\n \n \n-@dispatch(Range, Integers) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Range, Integers)\n+def _(a, b):\n return a\n \n \n-@dispatch(ImageSet, Set) # type: ignore # noqa:F811\n-def intersection_sets(self, other): # noqa:F811\n+@intersection_sets.register(ImageSet, Set)\n+def _(self, other):\n from sympy.solvers.diophantine import diophantine\n \n # Only handle the straight-forward univariate case\n@@ -395,15 +398,15 @@ def _solution_union(exprs, sym):\n return\n \n \n-@dispatch(ProductSet, ProductSet) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(ProductSet, ProductSet)\n+def _(a, b):\n if len(b.args) != len(a.args):\n return S.EmptySet\n return ProductSet(*(i.intersect(j) for i, j in zip(a.sets, b.sets)))\n \n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Interval, Interval)\n+def _(a, b):\n # handle (-oo, oo)\n infty = S.NegativeInfinity, S.Infinity\n if a == Interval(*infty):\n@@ -449,39 +452,39 @@ def intersection_sets(a, b): # noqa:F811\n \n return Interval(start, end, left_open, right_open)\n \n-@dispatch(EmptySet, Set) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(EmptySet, Set)\n+def _(a, b):\n return S.EmptySet\n \n-@dispatch(UniversalSet, Set) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(UniversalSet, Set)\n+def _(a, b):\n return b\n \n-@dispatch(FiniteSet, FiniteSet) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(FiniteSet, FiniteSet)\n+def _(a, b):\n return FiniteSet(*(a._elements & b._elements))\n \n-@dispatch(FiniteSet, Set) # 
type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(FiniteSet, Set)\n+def _(a, b):\n try:\n return FiniteSet(*[el for el in a if el in b])\n except TypeError:\n return None # could not evaluate `el in b` due to symbolic ranges.\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Set, Set)\n+def _(a, b):\n return None\n \n-@dispatch(Integers, Rationals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Integers, Rationals)\n+def _(a, b):\n return a\n \n-@dispatch(Naturals, Rationals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Naturals, Rationals)\n+def _(a, b):\n return a\n \n-@dispatch(Rationals, Reals) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Rationals, Reals)\n+def _(a, b):\n return a\n \n def _intlike_interval(a, b):\n@@ -494,10 +497,10 @@ def _intlike_interval(a, b):\n except ValueError:\n return None\n \n-@dispatch(Integers, Interval) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Integers, Interval)\n+def _(a, b):\n return _intlike_interval(a, b)\n \n-@dispatch(Naturals, Interval) # type: ignore # noqa:F811\n-def intersection_sets(a, b): # noqa:F811\n+@intersection_sets.register(Naturals, Interval)\n+def _(a, b):\n return _intlike_interval(a, b)\ndiff --git a/sympy/sets/handlers/issubset.py b/sympy/sets/handlers/issubset.py\nindex f39c594101cf..cc23e8bf56f1 100644\n--- a/sympy/sets/handlers/issubset.py\n+++ b/sympy/sets/handlers/issubset.py\n@@ -4,17 +4,21 @@\n from sympy.core.relational import Eq\n from sympy.sets.sets import FiniteSet, Interval, Set, Union, ProductSet\n from sympy.sets.fancysets import Complexes, Reals, Range, Rationals\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n \n \n _inf_sets = [S.Naturals, S.Naturals0, S.Integers, S.Rationals, S.Reals, S.Complexes]\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+\n+is_subset_sets = Dispatcher('is_subset_sets')\n+\n+\n+@is_subset_sets.register(Set, Set)\n+def _(a, b):\n return None\n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Interval, Interval)\n+def _(a, b):\n # This is correct but can be made more comprehensive...\n if fuzzy_bool(a.start < b.start):\n return False\n@@ -25,15 +29,15 @@ def is_subset_sets(a, b): # noqa:F811\n if (b.right_open and not a.right_open and fuzzy_bool(Eq(a.end, b.end))):\n return False\n \n-@dispatch(Interval, FiniteSet) # type: ignore # noqa:F811\n-def is_subset_sets(a_interval, b_fs): # noqa:F811\n+@is_subset_sets.register(Interval, FiniteSet)\n+def _(a_interval, b_fs):\n # An Interval can only be a subset of a finite set if it is finite\n # which can only happen if it has zero measure.\n if fuzzy_not(a_interval.measure.is_zero):\n return False\n \n-@dispatch(Interval, Union) # type: ignore # noqa:F811\n-def is_subset_sets(a_interval, b_u): # noqa:F811\n+@is_subset_sets.register(Interval, Union)\n+def _(a_interval, b_u):\n if all(isinstance(s, (Interval, FiniteSet)) for s in b_u.args):\n intervals = [s for s in b_u.args if isinstance(s, Interval)]\n if all(fuzzy_bool(a_interval.start < s.start) for s in intervals):\n@@ -48,14 +52,14 @@ def is_subset_sets(a_interval, b_u): # noqa:F811\n if 
all(no_overlap(s, a_interval) for s in intervals):\n return False\n \n-@dispatch(Range, Range) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Range, Range)\n+def _(a, b):\n if a.step == b.step == 1:\n return fuzzy_and([fuzzy_bool(a.start >= b.start),\n fuzzy_bool(a.stop <= b.stop)])\n \n-@dispatch(Range, Interval) # type: ignore # noqa:F811\n-def is_subset_sets(a_range, b_interval): # noqa:F811\n+@is_subset_sets.register(Range, Interval)\n+def _(a_range, b_interval):\n if a_range.step.is_positive:\n if b_interval.left_open and a_range.inf.is_finite:\n cond_left = a_range.inf > b_interval.left\n@@ -67,8 +71,8 @@ def is_subset_sets(a_range, b_interval): # noqa:F811\n cond_right = a_range.sup <= b_interval.right\n return fuzzy_and([cond_left, cond_right])\n \n-@dispatch(Range, FiniteSet) # type: ignore # noqa:F811\n-def is_subset_sets(a_range, b_finiteset): # noqa:F811\n+@is_subset_sets.register(Range, FiniteSet)\n+def _(a_range, b_finiteset):\n try:\n a_size = a_range.size\n except ValueError:\n@@ -101,40 +105,40 @@ def is_subset_sets(a_range, b_finiteset): # noqa:F811\n return True\n return None\n \n-@dispatch(Interval, Range) # type: ignore # noqa:F811\n-def is_subset_sets(a_interval, b_range): # noqa:F811\n+@is_subset_sets.register(Interval, Range)\n+def _(a_interval, b_range):\n if a_interval.measure.is_extended_nonzero:\n return False\n \n-@dispatch(Interval, Rationals) # type: ignore # noqa:F811\n-def is_subset_sets(a_interval, b_rationals): # noqa:F811\n+@is_subset_sets.register(Interval, Rationals)\n+def _(a_interval, b_rationals):\n if a_interval.measure.is_extended_nonzero:\n return False\n \n-@dispatch(Range, Complexes) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Range, Complexes)\n+def _(a, b):\n return True\n \n-@dispatch(Complexes, Interval) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Complexes, Interval)\n+def _(a, b):\n return False\n \n-@dispatch(Complexes, Range) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Complexes, Range)\n+def _(a, b):\n return False\n \n-@dispatch(Complexes, Rationals) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Complexes, Rationals)\n+def _(a, b):\n return False\n \n-@dispatch(Rationals, Reals) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Rationals, Reals)\n+def _(a, b):\n return True\n \n-@dispatch(Rationals, Range) # type: ignore # noqa:F811\n-def is_subset_sets(a, b): # noqa:F811\n+@is_subset_sets.register(Rationals, Range)\n+def _(a, b):\n return False\n \n-@dispatch(ProductSet, FiniteSet) # type: ignore # noqa:F811\n-def is_subset_sets(a_ps, b_fs): # noqa:F811\n+@is_subset_sets.register(ProductSet, FiniteSet)\n+def _(a_ps, b_fs):\n return fuzzy_and(b_fs.contains(x) for x in a_ps)\ndiff --git a/sympy/sets/handlers/mul.py b/sympy/sets/handlers/mul.py\nindex 984ac0571930..0dedc8068b79 100644\n--- a/sympy/sets/handlers/mul.py\n+++ b/sympy/sets/handlers/mul.py\n@@ -1,27 +1,32 @@\n from sympy.core import Basic, Expr\n from sympy.core.numbers import oo\n from sympy.core.symbol import symbols\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n from sympy.sets.setexpr import set_mul\n from sympy.sets.sets import Interval, Set\n \n+\n _x, _y = symbols(\"x y\")\n \n \n-@dispatch(Basic, Basic) # type: ignore # noqa:F811\n-def 
_set_mul(x, y): # noqa:F811\n+_set_mul = Dispatcher('_set_mul')\n+_set_div = Dispatcher('_set_div')\n+\n+\n+@_set_mul.register(Basic, Basic)\n+def _(x, y):\n return None\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811\n-def _set_mul(x, y): # noqa:F811\n+@_set_mul.register(Set, Set)\n+def _(x, y):\n return None\n \n-@dispatch(Expr, Expr) # type: ignore # noqa:F811\n-def _set_mul(x, y): # noqa:F811\n+@_set_mul.register(Expr, Expr)\n+def _(x, y):\n return x*y\n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def _set_mul(x, y): # noqa:F811\n+@_set_mul.register(Interval, Interval)\n+def _(x, y):\n \"\"\"\n Multiplications in interval arithmetic\n https://en.wikipedia.org/wiki/Interval_arithmetic\n@@ -43,20 +48,20 @@ def _set_mul(x, y): # noqa:F811\n maxopen\n )\n \n-@dispatch(Basic, Basic) # type: ignore # noqa:F811\n-def _set_div(x, y): # noqa:F811\n+@_set_div.register(Basic, Basic)\n+def _(x, y):\n return None\n \n-@dispatch(Expr, Expr) # type: ignore # noqa:F811\n-def _set_div(x, y): # noqa:F811\n+@_set_div.register(Expr, Expr)\n+def _(x, y):\n return x/y\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811 # noqa:F811\n-def _set_div(x, y): # noqa:F811\n+@_set_div.register(Set, Set)\n+def _(x, y):\n return None\n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def _set_div(x, y): # noqa:F811\n+@_set_div.register(Interval, Interval)\n+def _(x, y):\n \"\"\"\n Divisions in interval arithmetic\n https://en.wikipedia.org/wiki/Interval_arithmetic\ndiff --git a/sympy/sets/handlers/power.py b/sympy/sets/handlers/power.py\nindex 2e510deb1653..3cad4ee49ab2 100644\n--- a/sympy/sets/handlers/power.py\n+++ b/sympy/sets/handlers/power.py\n@@ -7,30 +7,33 @@\n from sympy.sets.fancysets import ImageSet\n from sympy.sets.setexpr import set_div\n from sympy.sets.sets import Set, Interval, FiniteSet, Union\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n \n \n _x, _y = symbols(\"x y\")\n \n \n-@dispatch(Basic, Basic) # type: ignore # noqa:F811\n-def _set_pow(x, y): # noqa:F811\n+_set_pow = Dispatcher('_set_pow')\n+\n+\n+@_set_pow.register(Basic, Basic)\n+def _(x, y):\n return None\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811\n-def _set_pow(x, y): # noqa:F811\n+@_set_pow.register(Set, Set)\n+def _(x, y):\n return ImageSet(Lambda((_x, _y), (_x ** _y)), x, y)\n \n-@dispatch(Expr, Expr) # type: ignore # noqa:F811\n-def _set_pow(x, y): # noqa:F811\n+@_set_pow.register(Expr, Expr)\n+def _(x, y):\n return x**y\n \n-@dispatch(Interval, Zero) # type: ignore # noqa:F811\n-def _set_pow(x, z): # noqa:F811\n+@_set_pow.register(Interval, Zero)\n+def _(x, z):\n return FiniteSet(S.One)\n \n-@dispatch(Interval, Integer) # type: ignore # noqa:F811\n-def _set_pow(x, exponent): # noqa:F811\n+@_set_pow.register(Interval, Integer)\n+def _(x, exponent):\n \"\"\"\n Powers in interval arithmetic\n https://en.wikipedia.org/wiki/Interval_arithmetic\n@@ -77,8 +80,8 @@ def _set_pow(x, exponent): # noqa:F811\n else:\n return Interval(S.Zero, sleft, S.Zero not in x, left_open)\n \n-@dispatch(Interval, Infinity) # type: ignore # noqa:F811\n-def _set_pow(b, e): # noqa:F811\n+@_set_pow.register(Interval, Infinity)\n+def _(b, e):\n # TODO: add logic for open intervals?\n if b.start.is_nonnegative:\n if b.end < 1:\n@@ -99,6 +102,6 @@ def _set_pow(b, e): # noqa:F811\n return Interval(0, oo)\n return Interval(-oo, oo)\n \n-@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811\n-def _set_pow(b, e): # noqa:F811\n+@_set_pow.register(Interval, 
NegativeInfinity)\n+def _(b, e):\n return _set_pow(set_div(S.One, b), oo)\ndiff --git a/sympy/sets/handlers/union.py b/sympy/sets/handlers/union.py\nindex fee36c83b97b..35ccf8f6d743 100644\n--- a/sympy/sets/handlers/union.py\n+++ b/sympy/sets/handlers/union.py\n@@ -4,43 +4,46 @@\n Interval, ProductSet, Set, Union, UniversalSet)\n from sympy.sets.fancysets import (ComplexRegion, Naturals, Naturals0,\n Integers, Rationals, Reals)\n-from sympy.multipledispatch import dispatch\n+from sympy.multipledispatch import Dispatcher\n \n \n-@dispatch(Naturals0, Naturals) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+union_sets = Dispatcher('union_sets')\n+\n+\n+@union_sets.register(Naturals0, Naturals)\n+def _(a, b):\n return a\n \n-@dispatch(Rationals, Naturals) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Rationals, Naturals)\n+def _(a, b):\n return a\n \n-@dispatch(Rationals, Naturals0) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Rationals, Naturals0)\n+def _(a, b):\n return a\n \n-@dispatch(Reals, Naturals) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Reals, Naturals)\n+def _(a, b):\n return a\n \n-@dispatch(Reals, Naturals0) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Reals, Naturals0)\n+def _(a, b):\n return a\n \n-@dispatch(Reals, Rationals) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Reals, Rationals)\n+def _(a, b):\n return a\n \n-@dispatch(Integers, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Integers, Set)\n+def _(a, b):\n intersect = Intersection(a, b)\n if intersect == a:\n return b\n elif intersect == b:\n return a\n \n-@dispatch(ComplexRegion, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(ComplexRegion, Set)\n+def _(a, b):\n if b.is_subset(S.Reals):\n # treat a subset of reals as a complex region\n b = ComplexRegion.from_real(b)\n@@ -54,17 +57,17 @@ def union_sets(a, b): # noqa:F811\n return ComplexRegion(Union(a.sets, b.sets), polar=True)\n return None\n \n-@dispatch(EmptySet, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(EmptySet, Set)\n+def _(a, b):\n return b\n \n \n-@dispatch(UniversalSet, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(UniversalSet, Set)\n+def _(a, b):\n return a\n \n-@dispatch(ProductSet, ProductSet) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(ProductSet, ProductSet)\n+def _(a, b):\n if b.is_subset(a):\n return a\n if len(b.sets) != len(a.sets):\n@@ -78,14 +81,14 @@ def union_sets(a, b): # noqa:F811\n return Union(a1, b1) * a2\n return None\n \n-@dispatch(ProductSet, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(ProductSet, Set)\n+def _(a, b):\n if b.is_subset(a):\n return a\n return None\n \n-@dispatch(Interval, Interval) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Interval, Interval)\n+def _(a, b):\n if a._is_comparable(b):\n from sympy.functions.elementary.miscellaneous import Min, Max\n # Non-overlapping intervals\n@@ -104,12 +107,12 @@ def union_sets(a, b): # noqa:F811\n (b.end != end or b.right_open))\n return Interval(start, end, left_open, right_open)\n \n-@dispatch(Interval, UniversalSet) # type: ignore # noqa:F811\n-def 
union_sets(a, b): # noqa:F811\n+@union_sets.register(Interval, UniversalSet)\n+def _(a, b):\n return S.UniversalSet\n \n-@dispatch(Interval, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Interval, Set)\n+def _(a, b):\n # If I have open end points and these endpoints are contained in b\n # But only in case, when endpoints are finite. Because\n # interval does not contain oo or -oo.\n@@ -127,18 +130,18 @@ def union_sets(a, b): # noqa:F811\n return {new_a, b}\n return None\n \n-@dispatch(FiniteSet, FiniteSet) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(FiniteSet, FiniteSet)\n+def _(a, b):\n return FiniteSet(*(a._elements | b._elements))\n \n-@dispatch(FiniteSet, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(FiniteSet, Set)\n+def _(a, b):\n # If `b` set contains one of my elements, remove it from `a`\n if any(b.contains(x) == True for x in a):\n return {\n FiniteSet(*[x for x in a if b.contains(x) != True]), b}\n return None\n \n-@dispatch(Set, Set) # type: ignore # noqa:F811\n-def union_sets(a, b): # noqa:F811\n+@union_sets.register(Set, Set)\n+def _(a, b):\n return None\ndiff --git a/sympy/stats/sampling/sample_numpy.py b/sympy/stats/sampling/sample_numpy.py\nindex ff4856593dcc..a50b17d256f1 100644\n--- a/sympy/stats/sampling/sample_numpy.py\n+++ b/sympy/stats/sampling/sample_numpy.py\n@@ -17,65 +17,65 @@ def do_sample_numpy(dist, size, rand_state):\n \n # CRV:\n \n-@do_sample_numpy.register(BetaDistribution) # type: ignore\n+@do_sample_numpy.register(BetaDistribution)\n def _(dist: BetaDistribution, size, rand_state):\n return rand_state.beta(a=float(dist.alpha), b=float(dist.beta), size=size)\n \n \n-@do_sample_numpy.register(ChiSquaredDistribution) # type: ignore\n+@do_sample_numpy.register(ChiSquaredDistribution)\n def _(dist: ChiSquaredDistribution, size, rand_state):\n return rand_state.chisquare(df=float(dist.k), size=size)\n \n \n-@do_sample_numpy.register(ExponentialDistribution) # type: ignore\n+@do_sample_numpy.register(ExponentialDistribution)\n def _(dist: ExponentialDistribution, size, rand_state):\n return rand_state.exponential(1 / float(dist.rate), size=size)\n \n \n-@do_sample_numpy.register(GammaDistribution) # type: ignore\n+@do_sample_numpy.register(GammaDistribution)\n def _(dist: GammaDistribution, size, rand_state):\n return rand_state.gamma(float(dist.k), float(dist.theta), size=size)\n \n \n-@do_sample_numpy.register(LogNormalDistribution) # type: ignore\n+@do_sample_numpy.register(LogNormalDistribution)\n def _(dist: LogNormalDistribution, size, rand_state):\n return rand_state.lognormal(float(dist.mean), float(dist.std), size=size)\n \n \n-@do_sample_numpy.register(NormalDistribution) # type: ignore\n+@do_sample_numpy.register(NormalDistribution)\n def _(dist: NormalDistribution, size, rand_state):\n return rand_state.normal(float(dist.mean), float(dist.std), size=size)\n \n \n-@do_sample_numpy.register(ParetoDistribution) # type: ignore\n+@do_sample_numpy.register(ParetoDistribution)\n def _(dist: ParetoDistribution, size, rand_state):\n return (numpy.random.pareto(a=float(dist.alpha), size=size) + 1) * float(dist.xm)\n \n \n-@do_sample_numpy.register(UniformDistribution) # type: ignore\n+@do_sample_numpy.register(UniformDistribution)\n def _(dist: UniformDistribution, size, rand_state):\n return rand_state.uniform(low=float(dist.left), high=float(dist.right), size=size)\n \n \n # DRV:\n 
\n-@do_sample_numpy.register(GeometricDistribution) # type: ignore\n+@do_sample_numpy.register(GeometricDistribution)\n def _(dist: GeometricDistribution, size, rand_state):\n return rand_state.geometric(p=float(dist.p), size=size)\n \n \n-@do_sample_numpy.register(PoissonDistribution) # type: ignore\n+@do_sample_numpy.register(PoissonDistribution)\n def _(dist: PoissonDistribution, size, rand_state):\n return rand_state.poisson(lam=float(dist.lamda), size=size)\n \n \n-@do_sample_numpy.register(ZetaDistribution) # type: ignore\n+@do_sample_numpy.register(ZetaDistribution)\n def _(dist: ZetaDistribution, size, rand_state):\n return rand_state.zipf(a=float(dist.s), size=size)\n \n \n # FRV:\n \n-@do_sample_numpy.register(BinomialDistribution) # type: ignore\n+@do_sample_numpy.register(BinomialDistribution)\n def _(dist: BinomialDistribution, size, rand_state):\n return rand_state.binomial(n=int(dist.n), p=float(dist.p), size=size)\ndiff --git a/sympy/stats/sampling/sample_pymc3.py b/sympy/stats/sampling/sample_pymc3.py\nindex e3c6f8f3aae9..a20e3858e16d 100644\n--- a/sympy/stats/sampling/sample_pymc3.py\n+++ b/sympy/stats/sampling/sample_pymc3.py\n@@ -17,81 +17,81 @@ def do_sample_pymc3(dist):\n \n # CRV:\n \n-@do_sample_pymc3.register(BetaDistribution) # type: ignore\n+@do_sample_pymc3.register(BetaDistribution)\n def _(dist: BetaDistribution):\n return pymc3.Beta('X', alpha=float(dist.alpha), beta=float(dist.beta))\n \n \n-@do_sample_pymc3.register(CauchyDistribution) # type: ignore\n+@do_sample_pymc3.register(CauchyDistribution)\n def _(dist: CauchyDistribution):\n return pymc3.Cauchy('X', alpha=float(dist.x0), beta=float(dist.gamma))\n \n \n-@do_sample_pymc3.register(ChiSquaredDistribution) # type: ignore\n+@do_sample_pymc3.register(ChiSquaredDistribution)\n def _(dist: ChiSquaredDistribution):\n return pymc3.ChiSquared('X', nu=float(dist.k))\n \n \n-@do_sample_pymc3.register(ExponentialDistribution) # type: ignore\n+@do_sample_pymc3.register(ExponentialDistribution)\n def _(dist: ExponentialDistribution):\n return pymc3.Exponential('X', lam=float(dist.rate))\n \n \n-@do_sample_pymc3.register(GammaDistribution) # type: ignore\n+@do_sample_pymc3.register(GammaDistribution)\n def _(dist: GammaDistribution):\n return pymc3.Gamma('X', alpha=float(dist.k), beta=1 / float(dist.theta))\n \n \n-@do_sample_pymc3.register(LogNormalDistribution) # type: ignore\n+@do_sample_pymc3.register(LogNormalDistribution)\n def _(dist: LogNormalDistribution):\n return pymc3.Lognormal('X', mu=float(dist.mean), sigma=float(dist.std))\n \n \n-@do_sample_pymc3.register(NormalDistribution) # type: ignore\n+@do_sample_pymc3.register(NormalDistribution)\n def _(dist: NormalDistribution):\n return pymc3.Normal('X', float(dist.mean), float(dist.std))\n \n \n-@do_sample_pymc3.register(GaussianInverseDistribution) # type: ignore\n+@do_sample_pymc3.register(GaussianInverseDistribution)\n def _(dist: GaussianInverseDistribution):\n return pymc3.Wald('X', mu=float(dist.mean), lam=float(dist.shape))\n \n \n-@do_sample_pymc3.register(ParetoDistribution) # type: ignore\n+@do_sample_pymc3.register(ParetoDistribution)\n def _(dist: ParetoDistribution):\n return pymc3.Pareto('X', alpha=float(dist.alpha), m=float(dist.xm))\n \n \n-@do_sample_pymc3.register(UniformDistribution) # type: ignore\n+@do_sample_pymc3.register(UniformDistribution)\n def _(dist: UniformDistribution):\n return pymc3.Uniform('X', lower=float(dist.left), upper=float(dist.right))\n \n \n # DRV:\n \n-@do_sample_pymc3.register(GeometricDistribution) # type: 
ignore\n+@do_sample_pymc3.register(GeometricDistribution)\n def _(dist: GeometricDistribution):\n return pymc3.Geometric('X', p=float(dist.p))\n \n \n-@do_sample_pymc3.register(NegativeBinomialDistribution) # type: ignore\n+@do_sample_pymc3.register(NegativeBinomialDistribution)\n def _(dist: NegativeBinomialDistribution):\n return pymc3.NegativeBinomial('X', mu=float((dist.p * dist.r) / (1 - dist.p)),\n alpha=float(dist.r))\n \n \n-@do_sample_pymc3.register(PoissonDistribution) # type: ignore\n+@do_sample_pymc3.register(PoissonDistribution)\n def _(dist: PoissonDistribution):\n return pymc3.Poisson('X', mu=float(dist.lamda))\n \n \n # FRV:\n \n-@do_sample_pymc3.register(BernoulliDistribution) # type: ignore\n+@do_sample_pymc3.register(BernoulliDistribution)\n def _(dist: BernoulliDistribution):\n return pymc3.Bernoulli('X', p=float(dist.p))\n \n \n-@do_sample_pymc3.register(BinomialDistribution) # type: ignore\n+@do_sample_pymc3.register(BinomialDistribution)\n def _(dist: BinomialDistribution):\n return pymc3.Binomial('X', n=int(dist.n), p=float(dist.p))\ndiff --git a/sympy/stats/sampling/sample_scipy.py b/sympy/stats/sampling/sample_scipy.py\nindex 21af16f8342f..f12508f68844 100644\n--- a/sympy/stats/sampling/sample_scipy.py\n+++ b/sympy/stats/sampling/sample_scipy.py\n@@ -24,7 +24,7 @@ def do_sample_scipy(dist, size, seed):\n \n # CRV\n \n-@do_sample_scipy.register(SingleContinuousDistribution) # type: ignore\n+@do_sample_scipy.register(SingleContinuousDistribution)\n def _(dist: SingleContinuousDistribution, size, seed):\n # if we don't need to make a handmade pdf, we won't\n import scipy.stats\n@@ -41,66 +41,66 @@ def _pdf(dist, x):\n return scipy_rv.rvs(size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(ChiSquaredDistribution) # type: ignore\n+@do_sample_scipy.register(ChiSquaredDistribution)\n def _(dist: ChiSquaredDistribution, size, seed):\n # same parametrisation\n return scipy.stats.chi2.rvs(df=float(dist.k), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(ExponentialDistribution) # type: ignore\n+@do_sample_scipy.register(ExponentialDistribution)\n def _(dist: ExponentialDistribution, size, seed):\n # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html#scipy.stats.expon\n return scipy.stats.expon.rvs(scale=1 / float(dist.rate), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(GammaDistribution) # type: ignore\n+@do_sample_scipy.register(GammaDistribution)\n def _(dist: GammaDistribution, size, seed):\n # https://stackoverflow.com/questions/42150965/how-to-plot-gamma-distribution-with-alpha-and-beta-parameters-in-python\n return scipy.stats.gamma.rvs(a=float(dist.k), scale=float(dist.theta), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(LogNormalDistribution) # type: ignore\n+@do_sample_scipy.register(LogNormalDistribution)\n def _(dist: LogNormalDistribution, size, seed):\n # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html\n return scipy.stats.lognorm.rvs(scale=float(exp(dist.mean)), s=float(dist.std), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(NormalDistribution) # type: ignore\n+@do_sample_scipy.register(NormalDistribution)\n def _(dist: NormalDistribution, size, seed):\n return scipy.stats.norm.rvs(loc=float(dist.mean), scale=float(dist.std), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(ParetoDistribution) # type: ignore\n+@do_sample_scipy.register(ParetoDistribution)\n def _(dist: ParetoDistribution, size, seed):\n # 
https://stackoverflow.com/questions/42260519/defining-pareto-distribution-in-python-scipy\n return scipy.stats.pareto.rvs(b=float(dist.alpha), scale=float(dist.xm), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(StudentTDistribution) # type: ignore\n+@do_sample_scipy.register(StudentTDistribution)\n def _(dist: StudentTDistribution, size, seed):\n return scipy.stats.t.rvs(df=float(dist.nu), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(UniformDistribution) # type: ignore\n+@do_sample_scipy.register(UniformDistribution)\n def _(dist: UniformDistribution, size, seed):\n # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.html\n return scipy.stats.uniform.rvs(loc=float(dist.left), scale=float(dist.right - dist.left), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(BetaDistribution) # type: ignore\n+@do_sample_scipy.register(BetaDistribution)\n def _(dist: BetaDistribution, size, seed):\n # same parametrisation\n return scipy.stats.beta.rvs(a=float(dist.alpha), b=float(dist.beta), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(CauchyDistribution) # type: ignore\n+@do_sample_scipy.register(CauchyDistribution)\n def _(dist: CauchyDistribution, size, seed):\n return scipy.stats.cauchy.rvs(loc=float(dist.x0), scale=float(dist.gamma), size=size, random_state=seed)\n \n \n # DRV:\n \n-@do_sample_scipy.register(DiscreteDistributionHandmade) # type: ignore\n+@do_sample_scipy.register(DiscreteDistributionHandmade)\n def _(dist: DiscreteDistributionHandmade, size, seed):\n from scipy.stats import rv_discrete\n \n@@ -116,44 +116,44 @@ def _pmf(dist, x):\n return scipy_rv.rvs(size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(GeometricDistribution) # type: ignore\n+@do_sample_scipy.register(GeometricDistribution)\n def _(dist: GeometricDistribution, size, seed):\n return scipy.stats.geom.rvs(p=float(dist.p), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(LogarithmicDistribution) # type: ignore\n+@do_sample_scipy.register(LogarithmicDistribution)\n def _(dist: LogarithmicDistribution, size, seed):\n return scipy.stats.logser.rvs(p=float(dist.p), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(NegativeBinomialDistribution) # type: ignore\n+@do_sample_scipy.register(NegativeBinomialDistribution)\n def _(dist: NegativeBinomialDistribution, size, seed):\n return scipy.stats.nbinom.rvs(n=float(dist.r), p=float(dist.p), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(PoissonDistribution) # type: ignore\n+@do_sample_scipy.register(PoissonDistribution)\n def _(dist: PoissonDistribution, size, seed):\n return scipy.stats.poisson.rvs(mu=float(dist.lamda), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(SkellamDistribution) # type: ignore\n+@do_sample_scipy.register(SkellamDistribution)\n def _(dist: SkellamDistribution, size, seed):\n return scipy.stats.skellam.rvs(mu1=float(dist.mu1), mu2=float(dist.mu2), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(YuleSimonDistribution) # type: ignore\n+@do_sample_scipy.register(YuleSimonDistribution)\n def _(dist: YuleSimonDistribution, size, seed):\n return scipy.stats.yulesimon.rvs(alpha=float(dist.rho), size=size, random_state=seed)\n \n \n-@do_sample_scipy.register(ZetaDistribution) # type: ignore\n+@do_sample_scipy.register(ZetaDistribution)\n def _(dist: ZetaDistribution, size, seed):\n return scipy.stats.zipf.rvs(a=float(dist.s), size=size, random_state=seed)\n \n \n # FRV:\n 
\n-@do_sample_scipy.register(SingleFiniteDistribution) # type: ignore\n+@do_sample_scipy.register(SingleFiniteDistribution)\n def _(dist: SingleFiniteDistribution, size, seed):\n # scipy can handle with custom distributions\n \ndiff --git a/sympy/tensor/array/expressions/array_expressions.py b/sympy/tensor/array/expressions/array_expressions.py\nindex 75573cbb3200..5ccbe8f39388 100644\n--- a/sympy/tensor/array/expressions/array_expressions.py\n+++ b/sympy/tensor/array/expressions/array_expressions.py\n@@ -33,7 +33,7 @@\n \n \n class _ArrayExpr(Expr):\n- pass\n+ shape : tTuple[Expr, ...]\n \n \n class ArraySymbol(_ArrayExpr):\n@@ -1571,7 +1571,7 @@ def __init__(self, base_array: typing.Union[ArrayContraction, ArrayDiagonal, Arr\n args_with_ind[arg_pos].indices[rel_pos] = i\n self.args_with_ind: List[_ArgE] = args_with_ind\n self.number_of_contraction_indices: int = len(contraction_indices)\n- self._track_permutation: Optional[List[List[int]]] = None\n+ self._track_permutation: List[List[int]] = None # type: ignore\n \n mapping = _get_mapping_from_subranks(base_array.subranks)\n \ndiff --git a/sympy/tensor/array/expressions/arrayexpr_derivatives.py b/sympy/tensor/array/expressions/arrayexpr_derivatives.py\nindex 20ca1b283892..97fa4659f5d5 100644\n--- a/sympy/tensor/array/expressions/arrayexpr_derivatives.py\n+++ b/sympy/tensor/array/expressions/arrayexpr_derivatives.py\n@@ -10,9 +10,11 @@\n from sympy.matrices.expressions.transpose import Transpose\n from sympy.combinatorics.permutations import _af_invert\n from sympy.matrices.expressions.applyfunc import ElementwiseApplyFunction\n-from sympy.tensor.array.expressions.array_expressions import ZeroArray, ArraySymbol, ArrayTensorProduct, \\\n- ArrayAdd, PermuteDims, ArrayDiagonal, ArrayElementwiseApplyFunc, get_rank, \\\n- get_shape, ArrayContraction, _array_tensor_product, _array_contraction, _array_diagonal, _array_add, _permute_dims\n+from sympy.tensor.array.expressions.array_expressions import (\n+ _ArrayExpr, ZeroArray, ArraySymbol, ArrayTensorProduct, ArrayAdd,\n+ PermuteDims, ArrayDiagonal, ArrayElementwiseApplyFunc, get_rank,\n+ get_shape, ArrayContraction, _array_tensor_product, _array_contraction,\n+ _array_diagonal, _array_add, _permute_dims)\n from sympy.tensor.array.expressions.conv_matrix_to_array import convert_matrix_to_array\n \n \n@@ -21,12 +23,12 @@ def array_derive(expr, x):\n raise NotImplementedError(f\"not implemented for type {type(expr)}\")\n \n \n-@array_derive.register(Expr) # type: ignore\n-def _(expr: Expr, x: Expr):\n- return ZeroArray(*x.shape) # type: ignore\n+@array_derive.register(Expr)\n+def _(expr: Expr, x: _ArrayExpr):\n+ return ZeroArray(*x.shape)\n \n \n-@array_derive.register(ArrayTensorProduct) # type: ignore\n+@array_derive.register(ArrayTensorProduct)\n def _(expr: ArrayTensorProduct, x: Expr):\n args = expr.args\n addend_list = []\n@@ -56,33 +58,33 @@ def _(expr: ArrayTensorProduct, x: Expr):\n return _array_add(*addend_list)\n \n \n-@array_derive.register(ArraySymbol) # type: ignore\n-def _(expr: ArraySymbol, x: Expr):\n+@array_derive.register(ArraySymbol)\n+def _(expr: ArraySymbol, x: _ArrayExpr):\n if expr == x:\n return _permute_dims(\n ArrayTensorProduct.fromiter(Identity(i) for i in expr.shape),\n [2*i for i in range(len(expr.shape))] + [2*i+1 for i in range(len(expr.shape))]\n )\n- return ZeroArray(*(x.shape + expr.shape)) # type: ignore\n+ return ZeroArray(*(x.shape + expr.shape))\n \n \n-@array_derive.register(MatrixSymbol) # type: ignore\n-def _(expr: MatrixSymbol, x: 
Expr):\n+@array_derive.register(MatrixSymbol)\n+def _(expr: MatrixSymbol, x: _ArrayExpr):\n m, n = expr.shape\n if expr == x:\n return _permute_dims(\n _array_tensor_product(Identity(m), Identity(n)),\n [0, 2, 1, 3]\n )\n- return ZeroArray(*(x.shape + expr.shape)) # type: ignore\n+ return ZeroArray(*(x.shape + expr.shape))\n \n \n-@array_derive.register(Identity) # type: ignore\n-def _(expr: Identity, x: Expr):\n- return ZeroArray(*(x.shape + expr.shape)) # type: ignore\n+@array_derive.register(Identity)\n+def _(expr: Identity, x: _ArrayExpr):\n+ return ZeroArray(*(x.shape + expr.shape))\n \n \n-@array_derive.register(Transpose) # type: ignore\n+@array_derive.register(Transpose)\n def _(expr: Transpose, x: Expr):\n # D(A.T, A) ==> (m,n,i,j) ==> D(A_ji, A_mn) = d_mj d_ni\n # D(B.T, A) ==> (m,n,i,j) ==> D(B_ji, A_mn)\n@@ -90,7 +92,7 @@ def _(expr: Transpose, x: Expr):\n return _permute_dims(fd, [0, 1, 3, 2])\n \n \n-@array_derive.register(Inverse) # type: ignore\n+@array_derive.register(Inverse)\n def _(expr: Inverse, x: Expr):\n mat = expr.I\n dexpr = array_derive(mat, x)\n@@ -100,7 +102,7 @@ def _(expr: Inverse, x: Expr):\n return pp\n \n \n-@array_derive.register(ElementwiseApplyFunction) # type: ignore\n+@array_derive.register(ElementwiseApplyFunction)\n def _(expr: ElementwiseApplyFunction, x: Expr):\n assert get_rank(expr) == 2\n assert get_rank(x) == 2\n@@ -116,7 +118,7 @@ def _(expr: ElementwiseApplyFunction, x: Expr):\n return td\n \n \n-@array_derive.register(ArrayElementwiseApplyFunc) # type: ignore\n+@array_derive.register(ArrayElementwiseApplyFunc)\n def _(expr: ArrayElementwiseApplyFunc, x: Expr):\n fdiff = expr._get_function_fdiff()\n subexpr = expr.expr\n@@ -131,18 +133,18 @@ def _(expr: ArrayElementwiseApplyFunc, x: Expr):\n return _array_diagonal(tp, *diag_indices)\n \n \n-@array_derive.register(MatrixExpr) # type: ignore\n+@array_derive.register(MatrixExpr)\n def _(expr: MatrixExpr, x: Expr):\n cg = convert_matrix_to_array(expr)\n return array_derive(cg, x)\n \n \n-@array_derive.register(HadamardProduct) # type: ignore\n+@array_derive.register(HadamardProduct)\n def _(expr: HadamardProduct, x: Expr):\n raise NotImplementedError()\n \n \n-@array_derive.register(ArrayContraction) # type: ignore\n+@array_derive.register(ArrayContraction)\n def _(expr: ArrayContraction, x: Expr):\n fd = array_derive(expr.expr, x)\n rank_x = len(get_shape(x))\n@@ -151,7 +153,7 @@ def _(expr: ArrayContraction, x: Expr):\n return _array_contraction(fd, *new_contraction_indices)\n \n \n-@array_derive.register(ArrayDiagonal) # type: ignore\n+@array_derive.register(ArrayDiagonal)\n def _(expr: ArrayDiagonal, x: Expr):\n dsubexpr = array_derive(expr.expr, x)\n rank_x = len(get_shape(x))\n@@ -159,12 +161,12 @@ def _(expr: ArrayDiagonal, x: Expr):\n return _array_diagonal(dsubexpr, *diag_indices)\n \n \n-@array_derive.register(ArrayAdd) # type: ignore\n+@array_derive.register(ArrayAdd)\n def _(expr: ArrayAdd, x: Expr):\n return _array_add(*[array_derive(arg, x) for arg in expr.args])\n \n \n-@array_derive.register(PermuteDims) # type: ignore\n+@array_derive.register(PermuteDims)\n def _(expr: PermuteDims, x: Expr):\n de = array_derive(expr.expr, x)\n perm = [0, 1] + [i + 2 for i in expr.permutation.array_form]\ndiff --git a/sympy/tensor/array/expressions/conv_array_to_matrix.py b/sympy/tensor/array/expressions/conv_array_to_matrix.py\nindex 50d250dfe1bf..99fb1e0e35f7 100644\n--- a/sympy/tensor/array/expressions/conv_array_to_matrix.py\n+++ b/sympy/tensor/array/expressions/conv_array_to_matrix.py\n@@ 
-172,7 +172,7 @@ def _array2matrix(expr):\n return expr\n \n \n-@_array2matrix.register(ZeroArray) # type: ignore\n+@_array2matrix.register(ZeroArray)\n def _(expr: ZeroArray):\n if get_rank(expr) == 2:\n return ZeroMatrix(*expr.shape)\n@@ -180,12 +180,12 @@ def _(expr: ZeroArray):\n return expr\n \n \n-@_array2matrix.register(ArrayTensorProduct) # type: ignore\n+@_array2matrix.register(ArrayTensorProduct)\n def _(expr: ArrayTensorProduct):\n return _a2m_tensor_product(*[_array2matrix(arg) for arg in expr.args])\n \n \n-@_array2matrix.register(ArrayContraction) # type: ignore\n+@_array2matrix.register(ArrayContraction)\n def _(expr: ArrayContraction):\n expr = expr.flatten_contraction_of_diagonal()\n expr = identify_removable_identity_matrices(expr)\n@@ -226,7 +226,7 @@ def _(expr: ArrayContraction):\n return _array_contraction(ret, *expr.contraction_indices)\n \n \n-@_array2matrix.register(ArrayDiagonal) # type: ignore\n+@_array2matrix.register(ArrayDiagonal)\n def _(expr: ArrayDiagonal):\n pexpr = _array_diagonal(_array2matrix(expr.expr), *expr.diagonal_indices)\n pexpr = identify_hadamard_products(pexpr)\n@@ -237,7 +237,7 @@ def _(expr: ArrayDiagonal):\n return _array2matrix(pexpr)\n \n \n-@_array2matrix.register(PermuteDims) # type: ignore\n+@_array2matrix.register(PermuteDims)\n def _(expr: PermuteDims):\n if expr.permutation.array_form == [1, 0]:\n return _a2m_transpose(_array2matrix(expr.expr))\n@@ -283,23 +283,23 @@ def _(expr: PermuteDims):\n p2 = permuted[2*i+1]\n if p1 // 2 != p2 // 2:\n return _permute_dims(mat_mul_lines, permutation)\n- pos = p1 // 2\n+ posi = p1 // 2\n if p1 > p2:\n- args_array[i] = _a2m_transpose(mat_mul_lines.args[pos]) # type: ignore\n+ args_array[i] = _a2m_transpose(mat_mul_lines.args[posi])\n else:\n- args_array[i] = mat_mul_lines.args[pos] # type: ignore\n+ args_array[i] = mat_mul_lines.args[posi]\n return _a2m_tensor_product(*args_array)\n else:\n return expr\n \n \n-@_array2matrix.register(ArrayAdd) # type: ignore\n+@_array2matrix.register(ArrayAdd)\n def _(expr: ArrayAdd):\n addends = [_array2matrix(arg) for arg in expr.args]\n return _a2m_add(*addends)\n \n \n-@_array2matrix.register(ArrayElementwiseApplyFunc) # type: ignore\n+@_array2matrix.register(ArrayElementwiseApplyFunc)\n def _(expr: ArrayElementwiseApplyFunc):\n subexpr = _array2matrix(expr.expr)\n if isinstance(subexpr, MatrixExpr):\n@@ -315,7 +315,7 @@ def _(expr: ArrayElementwiseApplyFunc):\n return ArrayElementwiseApplyFunc(expr.function, subexpr)\n \n \n-@_array2matrix.register(ArrayElement) # type: ignore\n+@_array2matrix.register(ArrayElement)\n def _(expr: ArrayElement):\n ret = _array2matrix(expr.name)\n if isinstance(ret, MatrixExpr):\n@@ -328,7 +328,7 @@ def _remove_trivial_dims(expr):\n return expr, []\n \n \n-@_remove_trivial_dims.register(ArrayTensorProduct) # type: ignore\n+@_remove_trivial_dims.register(ArrayTensorProduct)\n def _(expr: ArrayTensorProduct):\n # Recognize expressions like [x, y] with shape (k, 1, k, 1) as `x*y.T`.\n # The matrix expression has to be equivalent to the tensor product of the\n@@ -403,7 +403,7 @@ def _(expr: ArrayTensorProduct):\n return newexpr, newremoved\n \n \n-@_remove_trivial_dims.register(ArrayAdd) # type: ignore\n+@_remove_trivial_dims.register(ArrayAdd)\n def _(expr: ArrayAdd):\n rec = [_remove_trivial_dims(arg) for arg in expr.args]\n newargs, removed = zip(*rec)\n@@ -412,7 +412,7 @@ def _(expr: ArrayAdd):\n return _a2m_add(*newargs), removed[0]\n \n \n-@_remove_trivial_dims.register(PermuteDims) # type: 
ignore\n+@_remove_trivial_dims.register(PermuteDims)\n def _(expr: PermuteDims):\n subexpr, subremoved = _remove_trivial_dims(expr.expr)\n p = expr.permutation.array_form\n@@ -429,7 +429,7 @@ def _(expr: PermuteDims):\n return newexpr, premoved\n \n \n-@_remove_trivial_dims.register(ArrayContraction) # type: ignore\n+@_remove_trivial_dims.register(ArrayContraction)\n def _(expr: ArrayContraction):\n new_expr, removed0 = _array_contraction_to_diagonal_multiple_identity(expr)\n if new_expr != expr:\n@@ -483,21 +483,21 @@ def _remove_diagonalized_identity_matrices(expr: ArrayDiagonal):\n return editor.to_array_contraction(), removed\n \n \n-@_remove_trivial_dims.register(ArrayDiagonal) # type: ignore\n+@_remove_trivial_dims.register(ArrayDiagonal)\n def _(expr: ArrayDiagonal):\n newexpr, removed = _remove_trivial_dims(expr.expr)\n shifts = list(accumulate([0] + [1 if i in removed else 0 for i in range(get_rank(expr.expr))]))\n- new_diag_indices = {i: tuple(j for j in i if j not in removed) for i in expr.diagonal_indices}\n- for old_diag_tuple, new_diag_tuple in new_diag_indices.items():\n+ new_diag_indices_map = {i: tuple(j for j in i if j not in removed) for i in expr.diagonal_indices}\n+ for old_diag_tuple, new_diag_tuple in new_diag_indices_map.items():\n if len(new_diag_tuple) == 1:\n removed = [i for i in removed if i not in old_diag_tuple]\n- new_diag_indices = [tuple(j - shifts[j] for j in i) for i in new_diag_indices.values()] # type: ignore\n+ new_diag_indices = [tuple(j - shifts[j] for j in i) for i in new_diag_indices_map.values()]\n rank = get_rank(expr.expr)\n removed = ArrayDiagonal._push_indices_up(expr.diagonal_indices, removed, rank)\n removed = sorted({i for i in removed})\n # If there are single axes to diagonalize remaining, it means that their\n # corresponding dimension has been removed, they no longer need diagonalization:\n- new_diag_indices = [i for i in new_diag_indices if len(i) > 0] # type: ignore\n+ new_diag_indices = [i for i in new_diag_indices if len(i) > 0]\n if len(new_diag_indices) > 0:\n newexpr2 = _array_diagonal(newexpr, *new_diag_indices, allow_trivial_diags=True)\n else:\n@@ -510,7 +510,7 @@ def _(expr: ArrayDiagonal):\n return newexpr2, removed\n \n \n-@_remove_trivial_dims.register(ElementwiseApplyFunction) # type: ignore\n+@_remove_trivial_dims.register(ElementwiseApplyFunction)\n def _(expr: ElementwiseApplyFunction):\n subexpr, removed = _remove_trivial_dims(expr.expr)\n if subexpr.shape == (1, 1):\n@@ -519,7 +519,7 @@ def _(expr: ElementwiseApplyFunction):\n return ElementwiseApplyFunction(expr.function, subexpr), []\n \n \n-@_remove_trivial_dims.register(ArrayElementwiseApplyFunc) # type: ignore\n+@_remove_trivial_dims.register(ArrayElementwiseApplyFunc)\n def _(expr: ArrayElementwiseApplyFunc):\n subexpr, removed = _remove_trivial_dims(expr.expr)\n return ArrayElementwiseApplyFunc(expr.function, subexpr), removed\n@@ -956,7 +956,7 @@ def _array_contraction_to_diagonal_multiple_identity(expr: ArrayContraction):\n flag = True\n break\n free_pos = list(range(*editor.get_absolute_free_range(id1)))[0]\n- editor._track_permutation[-1].append(free_pos) # type: ignore\n+ editor._track_permutation[-1].append(free_pos)\n id1.element = None\n flag = False\n break\n@@ -970,7 +970,7 @@ def _array_contraction_to_diagonal_multiple_identity(expr: ArrayContraction):\n for j, e in enumerate(editor.args_with_ind):\n if e.element is None:\n editor._track_permutation[j] = None # type: ignore\n- editor._track_permutation = [i for i in editor._track_permutation if i 
is not None] # type: ignore\n+ editor._track_permutation = [i for i in editor._track_permutation if i is not None]\n # Renumber permutation array form in order to deal with deleted positions:\n remap = {e: i for i, e in enumerate(sorted({k for j in editor._track_permutation for k in j}))}\n editor._track_permutation = [[remap[j] for j in i] for i in editor._track_permutation]\n" }
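The conversion shown in the patch above is mechanical: each module-level `@dispatch(TypeA, TypeB)` function is replaced by a single named `Dispatcher` whose handlers are registered per signature and defined as throwaway `_` functions, which is why the rewritten handlers no longer carry the `# type: ignore` / `# noqa:F811` suppressions. A minimal, self-contained sketch of that style follows; the dispatcher name `describe_pair` and its toy handlers are illustrative only and do not appear in the patch.

```python
# Sketch of the Dispatcher-based registration style applied in the patch.
# `describe_pair` and its handlers are made up for illustration.
from sympy import FiniteSet, Interval
from sympy.multipledispatch import Dispatcher
from sympy.sets.sets import Set

describe_pair = Dispatcher('describe_pair')

@describe_pair.register(Interval, Interval)
def _(a, b):
    # Most specific signature: picked when both arguments are Intervals.
    return "interval/interval"

@describe_pair.register(Set, Set)
def _(a, b):
    # Generic fallback for any other pair of Sets.
    return "set/set"

# Dispatch is resolved from the runtime types of the arguments:
assert describe_pair(Interval(0, 1), Interval(2, 3)) == "interval/interval"
assert describe_pair(FiniteSet(1), FiniteSet(2)) == "set/set"
```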
[ { "diff_hunk": "@@ -1571,7 +1571,7 @@ def __init__(self, base_array: typing.Union[ArrayContraction, ArrayDiagonal, Arr\n args_with_ind[arg_pos].indices[rel_pos] = i\n self.args_with_ind: List[_ArgE] = args_with_ind\n self.number_of_contraction_indices: int = len(contraction_indices)\n- self._track_permutation: Optional[List[List[int]]] = None\n+ self._track_permutation: List[List[int]] = None # type: ignore", "line": null, "original_line": 1574, "original_start_line": null, "path": "sympy/tensor/array/expressions/array_expressions.py", "start_line": null, "text": "@user1:\nWould `List[List[int]] | NoneType` allow the use of `None` here without ignore?\n\n@author:\nMaybe. I'll check...\n\n@author:\nThat gives\r\n```console\r\n$ mypy sympy\r\nsympy/tensor/array/expressions/array_expressions.py:1574: error: X | Y syntax for unions requires Python 3.10\r\nsympy/tensor/array/expressions/array_expressions.py:1574: error: Name \"NoneType\" is not defined\r\n```\r\n\n\n@user1:\nReading the docs, it looks like it is exactly what `Optional` does in the first place: https://docs.python.org/3/library/typing.html#typing.Optional \r\n\r\nWhy was that changed? It feels like a step backwards. \n\n@author:\nI'm not sure exactly how to type this given that before use it will always become non-None. If you have Optional here then mypy will complain every time this is used:\r\n```\r\n$ mypy sympy\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:959: error: Value of type \"Optional[List[List[int]]]\" is not indexable\r\nsympy/tensor/array/expressions/conv_array_to_matrix.py:973: error: Item \"None\" of \"Optional[List[List[int]]]\" has no attribute \"__iter__\" (not iterable)\r\nFound 2 errors in 1 file (checked 1455 source files)\r\n```\n\n@user1:\nI see, it seems that mypy simply doesn't allow you to do this unless you handle \"None\" in the code explicitly everywhere even if you know it will never be `None`.\n\n@author:\nI'll just add back the type: ignore. This class is too complicated for me to figure out how to clean it up. 
(I don't think it needs to be complicated but the code is really messy)\n\n@user1:\n```diff\r\ndiff --git a/sympy/tensor/array/expressions/array_expressions.py b/sympy/tensor/array/expressions/array_expressions.py\r\nindex 75573cbb32..c42c53babb 100644\r\n--- a/sympy/tensor/array/expressions/array_expressions.py\r\n+++ b/sympy/tensor/array/expressions/array_expressions.py\r\n@@ -1571,7 +1571,7 @@ def __init__(self, base_array: typing.Union[ArrayContraction, ArrayDiagonal, Arr\r\n args_with_ind[arg_pos].indices[rel_pos] = i\r\n self.args_with_ind: List[_ArgE] = args_with_ind\r\n self.number_of_contraction_indices: int = len(contraction_indices)\r\n- self._track_permutation: Optional[List[List[int]]] = None\r\n+ self._track_permutation: List[List[int]] = [[]]\r\n\r\n mapping = _get_mapping_from_subranks(base_array.subranks)\r\n\r\n@@ -1673,7 +1673,7 @@ def to_array_contraction(self):\r\n contraction_indices = self.get_contraction_indices()\r\n expr = _array_contraction(_array_tensor_product(*args), *contraction_indices)\r\n expr2 = _array_diagonal(expr, *diag_indices_filtered)\r\n- if self._track_permutation is not None:\r\n+ if any(self._track_permutation):\r\n permutation2 = _af_invert([j for i in self._track_permutation for j in i])\r\n expr2 = _permute_dims(expr2, permutation2)\r\n\r\n@@ -1755,8 +1755,8 @@ def track_permutation_start(self):\r\n def track_permutation_merge(self, destination: _ArgE, from_element: _ArgE):\r\n index_destination = self.args_with_ind.index(destination)\r\n index_element = self.args_with_ind.index(from_element)\r\n- self._track_permutation[index_destination].extend(self._track_permutation[index_element]) # type: ignore\r\n- self._track_permutation.pop(index_element) # type: ignore\r\n+ self._track_permutation[index_destination].extend(self._track_permutation[index_element])\r\n+ self._track_permutation.pop(index_element)\r\n\r\n def get_absolute_free_range(self, arg: _ArgE) -> typing.Tuple[int, int]:\r\n \"\"\"\r\n```\r\nIt seems to pass `mypy`, `bin/test sympy/tensor`, `bin/doctest/ sympy/tensor`, but haven't done the full tests\n\n@author:\nThere is this which might be affected:\r\nhttps://github.com/sympy/sympy/blob/c43550c31ef6355c49c4461e90a23e9629291b93/sympy/tensor/array/expressions/array_expressions.py#L1676\n\n@user1:\nThe reason why it has that is because it uses `self._track_permutation` in a list comprehension, so the `any(self._track_permutation)` check should identify the uninitialized list.\r\n\r\nAlternative is to use what mypy seems to imply:\r\n```diff\r\ndiff --git a/sympy/tensor/array/expressions/array_expressions.py b/sympy/tensor/array/expressions/array_expressions.py\r\nindex 75573cbb32..079a1febe6 100644\r\n--- a/sympy/tensor/array/expressions/array_expressions.py\r\n+++ b/sympy/tensor/array/expressions/array_expressions.py\r\n@@ -1755,8 +1755,9 @@ def track_permutation_start(self):\r\n def track_permutation_merge(self, destination: _ArgE, from_element: _ArgE):\r\n index_destination = self.args_with_ind.index(destination)\r\n index_element = self.args_with_ind.index(from_element)\r\n- self._track_permutation[index_destination].extend(self._track_permutation[index_element]) # type: ignore\r\n- self._track_permutation.pop(index_element) # type: ignore\r\n+ if self._track_permutation is not None:\r\n+ self._track_permutation[index_destination].extend(self._track_permutation[index_element])\r\n+ self._track_permutation.pop(index_element)\r\n\r\n def get_absolute_free_range(self, arg: _ArgE) -> typing.Tuple[int, int]:\r\n 
\"\"\"\r\n```\r\n\n\n@author:\nThis code is too messy for me to try and make changes to it. I don't really understand what it does and I don't know how well tested it is so I'd rather just leave the `type: ignore` there and let someone who wants to improve this code fix it. At least I don't want to smuggle a bug in with a PR that has a large number of trivial changes.\n\n@user1:\nOk, fair enough." }, { "diff_hunk": "@@ -283,23 +283,23 @@ def _(expr: PermuteDims):\n p2 = permuted[2*i+1]\n if p1 // 2 != p2 // 2:\n return _permute_dims(mat_mul_lines, permutation)\n- pos = p1 // 2\n+ posi = p1 // 2", "line": null, "original_line": 286, "original_start_line": null, "path": "sympy/tensor/array/expressions/conv_array_to_matrix.py", "start_line": null, "text": "@user1:\nWhy store this in a variable at all, it gets used only once due to the if-else (unless it gets moved up)\n\n@author:\nI'm not sure I understand, It gets used in either of two branches below.\n\n@user1:\nRight, it gets used in either branch, so only once. It's not exactly a complicated expression, so why not just put `p1 // 2`` directly where `posi` is used? \n\n@author:\nThere can be good reasons for naming something even if it is only used once but since this isn't a particularly good name anyway I'll do that." } ]
601588cf14e4c41f8f81b3e91a3be2ebae1f4cd7
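The review thread above turns on how mypy treats an attribute that is `None` only between construction and first use: annotating it as `Optional` is honest but forces an explicit narrowing at every use site, while dropping `Optional` needs a `# type: ignore` on the `None` assignment, which is the route the patch takes. A small self-contained illustration of the `Optional` option, using a hypothetical class rather than the actual editor class from the PR:

```python
from typing import List, Optional


class PermutationTracker:
    """Toy stand-in for an object whose list attribute starts out as None."""

    def __init__(self) -> None:
        # Honest annotation: the attribute really is None until start() runs.
        self._track: Optional[List[List[int]]] = None

    def start(self, n: int) -> None:
        self._track = [[i] for i in range(n)]

    def merge(self, dst: int, src: int) -> None:
        # Without this narrowing, mypy reports errors such as
        #   Value of type "Optional[List[List[int]]]" is not indexable
        assert self._track is not None
        self._track[dst].extend(self._track[src])
        self._track.pop(src)
```

Adding the `assert` (or an explicit `if self._track is None: raise ...`) at each touch point is what keeps the `Optional` annotation clean, at the cost of the extra guards the reviewers were weighing against a single `# type: ignore`.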
diff --git a/sympy/assumptions/handlers/calculus.py b/sympy/assumptions/handlers/calculus.py index ec2d9dcfd0c7..263bed6da00c 100644 --- a/sympy/assumptions/handlers/calculus.py +++ b/sympy/assumptions/handlers/calculus.py @@ -18,7 +18,7 @@ # FinitePredicate [email protected](Symbol) # type: ignore [email protected](Symbol) def _(expr, assumptions): """ Handles Symbol. @@ -29,7 +29,7 @@ def _(expr, assumptions): return True return None [email protected](Add) # type: ignore [email protected](Add) def _(expr, assumptions): """ Return True if expr is bounded, False if not and None if unknown. @@ -111,7 +111,7 @@ def _(expr, assumptions): result = _bounded return result [email protected](Mul) # type: ignore [email protected](Mul) def _(expr, assumptions): """ Return True if expr is bounded, False if not and None if unknown. @@ -166,7 +166,7 @@ def _(expr, assumptions): result = False return result [email protected](Pow) # type: ignore [email protected](Pow) def _(expr, assumptions): """ * Unbounded ** NonZero -> Unbounded @@ -198,11 +198,11 @@ def _(expr, assumptions): return False return None [email protected](exp) # type: ignore [email protected](exp) def _(expr, assumptions): return ask(Q.finite(expr.exp), assumptions) [email protected](log) # type: ignore [email protected](log) def _(expr, assumptions): # After complex -> finite fact is registered to new assumption system, # querying Q.infinite may be removed. @@ -210,16 +210,16 @@ def _(expr, assumptions): return False return ask(~Q.zero(expr.args[0]), assumptions) [email protected]_many(cos, sin, Number, Pi, Exp1, GoldenRatio, # type: ignore [email protected]_many(cos, sin, Number, Pi, Exp1, GoldenRatio, TribonacciConstant, ImaginaryUnit, sign) def _(expr, assumptions): return True [email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) # type: ignore [email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) def _(expr, assumptions): return False [email protected](NaN) # type: ignore [email protected](NaN) def _(expr, assumptions): return None @@ -227,7 +227,7 @@ def _(expr, assumptions): # InfinitePredicate [email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) # type: ignore [email protected]_many(ComplexInfinity, Infinity, NegativeInfinity) def _(expr, assumptions): return True @@ -235,12 +235,12 @@ def _(expr, assumptions): # PositiveInfinitePredicate [email protected](Infinity) # type: ignore [email protected](Infinity) def _(expr, assumptions): return True [email protected]_many(NegativeInfinity, ComplexInfinity) # type: ignore [email protected]_many(NegativeInfinity, ComplexInfinity) def _(expr, assumptions): return False @@ -248,11 +248,11 @@ def _(expr, assumptions): # NegativeInfinitePredicate [email protected](NegativeInfinity) # type: ignore [email protected](NegativeInfinity) def _(expr, assumptions): return True [email protected]_many(Infinity, ComplexInfinity) # type: ignore [email protected]_many(Infinity, ComplexInfinity) def _(expr, assumptions): return False diff --git a/sympy/assumptions/handlers/common.py b/sympy/assumptions/handlers/common.py index 303e6e3a2596..8d2ef9859335 100644 --- a/sympy/assumptions/handlers/common.py +++ b/sympy/assumptions/handlers/common.py @@ -47,7 +47,7 @@ def AlwaysNone(expr, assumptions): # CommutativePredicate [email protected](Symbol) # type: ignore [email protected](Symbol) def _(expr, assumptions): """Objects are expected to be commutative unless otherwise stated""" assumps = conjuncts(assumptions) @@ -59,41 +59,41 @@ def _(expr, assumptions): 
return False return True [email protected](Basic) # type: ignore [email protected](Basic) def _(expr, assumptions): for arg in expr.args: if not ask(Q.commutative(arg), assumptions): return False return True [email protected](Number) # type: ignore [email protected](Number) def _(expr, assumptions): return True [email protected](NaN) # type: ignore [email protected](NaN) def _(expr, assumptions): return True # IsTruePredicate [email protected](bool) # type: ignore [email protected](bool) def _(expr, assumptions): return expr [email protected](BooleanTrue) # type: ignore [email protected](BooleanTrue) def _(expr, assumptions): return True [email protected](BooleanFalse) # type: ignore [email protected](BooleanFalse) def _(expr, assumptions): return False [email protected](AppliedPredicate) # type: ignore [email protected](AppliedPredicate) def _(expr, assumptions): return ask(expr, assumptions) [email protected](Not) # type: ignore [email protected](Not) def _(expr, assumptions): arg = expr.args[0] if arg.is_Symbol: @@ -105,7 +105,7 @@ def _(expr, assumptions): else: return None [email protected](Or) # type: ignore [email protected](Or) def _(expr, assumptions): result = False for arg in expr.args: @@ -116,7 +116,7 @@ def _(expr, assumptions): result = None return result [email protected](And) # type: ignore [email protected](And) def _(expr, assumptions): result = True for arg in expr.args: @@ -127,12 +127,12 @@ def _(expr, assumptions): result = None return result [email protected](Implies) # type: ignore [email protected](Implies) def _(expr, assumptions): p, q = expr.args return ask(~p | q, assumptions=assumptions) [email protected](Equivalent) # type: ignore [email protected](Equivalent) def _(expr, assumptions): p, q = expr.args pt = ask(p, assumptions=assumptions) diff --git a/sympy/assumptions/handlers/matrices.py b/sympy/assumptions/handlers/matrices.py index a220b4363dd3..73debd00c8bc 100644 --- a/sympy/assumptions/handlers/matrices.py +++ b/sympy/assumptions/handlers/matrices.py @@ -31,14 +31,14 @@ def _Factorization(predicate, expr, assumptions): # SquarePredicate [email protected](MatrixExpr) # type: ignore [email protected](MatrixExpr) def _(expr, assumptions): return expr.shape[0] == expr.shape[1] # SymmetricPredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, mmul = expr.as_coeff_mmul() if all(ask(Q.symmetric(arg), assumptions) for arg in mmul.args): @@ -52,7 +52,7 @@ def _(expr, assumptions): return True return ask(Q.symmetric(MatMul(*mmul.args[1:-1])), assumptions) [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -65,11 +65,11 @@ def _(expr, assumptions): return ask(Q.symmetric(base), assumptions) return None [email protected](MatAdd) # type: ignore [email protected](MatAdd) def _(expr, assumptions): return all(ask(Q.symmetric(arg), assumptions) for arg in expr.args) [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if not expr.is_square: return False @@ -80,15 +80,15 @@ def _(expr, assumptions): if Q.symmetric(expr) in conjuncts(assumptions): return True [email protected]_many(OneMatrix, ZeroMatrix) # type: ignore [email protected]_many(OneMatrix, ZeroMatrix) def _(expr, assumptions): return ask(Q.square(expr), assumptions) [email protected]_many(Inverse, Transpose) # type: ignore [email protected]_many(Inverse, Transpose) def _(expr, assumptions): return 
ask(Q.symmetric(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): # TODO: implement sathandlers system for the matrices. # Now it duplicates the general fact: Implies(Q.diagonal, Q.symmetric). @@ -99,14 +99,14 @@ def _(expr, assumptions): else: return ask(Q.symmetric(expr.parent), assumptions) [email protected](Identity) # type: ignore [email protected](Identity) def _(expr, assumptions): return True # InvertiblePredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, mmul = expr.as_coeff_mmul() if all(ask(Q.invertible(arg), assumptions) for arg in mmul.args): @@ -115,7 +115,7 @@ def _(expr, assumptions): for arg in mmul.args): return False [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -126,53 +126,53 @@ def _(expr, assumptions): return ask(Q.invertible(base), assumptions) return None [email protected](MatAdd) # type: ignore [email protected](MatAdd) def _(expr, assumptions): return None [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if not expr.is_square: return False if Q.invertible(expr) in conjuncts(assumptions): return True [email protected]_many(Identity, Inverse) # type: ignore [email protected]_many(Identity, Inverse) def _(expr, assumptions): return True [email protected](ZeroMatrix) # type: ignore [email protected](ZeroMatrix) def _(expr, assumptions): return False [email protected](OneMatrix) # type: ignore [email protected](OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 [email protected](Transpose) # type: ignore [email protected](Transpose) def _(expr, assumptions): return ask(Q.invertible(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if not expr.on_diag: return None else: return ask(Q.invertible(expr.parent), assumptions) [email protected](MatrixBase) # type: ignore [email protected](MatrixBase) def _(expr, assumptions): if not expr.is_square: return False return expr.rank() == expr.rows [email protected](MatrixExpr) # type: ignore [email protected](MatrixExpr) def _(expr, assumptions): if not expr.is_square: return False return None [email protected](BlockMatrix) # type: ignore [email protected](BlockMatrix) def _(expr, assumptions): from sympy.matrices.expressions.blockmatrix import reblock_2x2 if not expr.is_square: @@ -200,7 +200,7 @@ def _(expr, assumptions): return invertible return None [email protected](BlockDiagMatrix) # type: ignore [email protected](BlockDiagMatrix) def _(expr, assumptions): if expr.rowblocksizes != expr.colblocksizes: return None @@ -209,7 +209,7 @@ def _(expr, assumptions): # OrthogonalPredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, mmul = expr.as_coeff_mmul() if (all(ask(Q.orthogonal(arg), assumptions) for arg in mmul.args) and @@ -219,7 +219,7 @@ def _(expr, assumptions): for arg in mmul.args): return False [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -228,13 +228,13 @@ def _(expr, assumptions): return ask(Q.orthogonal(base), assumptions) return None [email protected](MatAdd) # type: ignore [email protected](MatAdd) def _(expr, assumptions): if (len(expr.args) == 1 and 
ask(Q.orthogonal(expr.args[0]), assumptions)): return True [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if (not expr.is_square or ask(Q.invertible(expr), assumptions) is False): @@ -242,33 +242,33 @@ def _(expr, assumptions): if Q.orthogonal(expr) in conjuncts(assumptions): return True [email protected](Identity) # type: ignore [email protected](Identity) def _(expr, assumptions): return True [email protected](ZeroMatrix) # type: ignore [email protected](ZeroMatrix) def _(expr, assumptions): return False [email protected]_many(Inverse, Transpose) # type: ignore [email protected]_many(Inverse, Transpose) def _(expr, assumptions): return ask(Q.orthogonal(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if not expr.on_diag: return None else: return ask(Q.orthogonal(expr.parent), assumptions) [email protected](Factorization) # type: ignore [email protected](Factorization) def _(expr, assumptions): return _Factorization(Q.orthogonal, expr, assumptions) # UnitaryPredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, mmul = expr.as_coeff_mmul() if (all(ask(Q.unitary(arg), assumptions) for arg in mmul.args) and @@ -278,7 +278,7 @@ def _(expr, assumptions): for arg in mmul.args): return False [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -287,7 +287,7 @@ def _(expr, assumptions): return ask(Q.unitary(base), assumptions) return None [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if (not expr.is_square or ask(Q.invertible(expr), assumptions) is False): @@ -295,38 +295,38 @@ def _(expr, assumptions): if Q.unitary(expr) in conjuncts(assumptions): return True [email protected]_many(Inverse, Transpose) # type: ignore [email protected]_many(Inverse, Transpose) def _(expr, assumptions): return ask(Q.unitary(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if not expr.on_diag: return None else: return ask(Q.unitary(expr.parent), assumptions) [email protected]_many(DFT, Identity) # type: ignore [email protected]_many(DFT, Identity) def _(expr, assumptions): return True [email protected](ZeroMatrix) # type: ignore [email protected](ZeroMatrix) def _(expr, assumptions): return False [email protected](Factorization) # type: ignore [email protected](Factorization) def _(expr, assumptions): return _Factorization(Q.unitary, expr, assumptions) # FullRankPredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): if all(ask(Q.fullrank(arg), assumptions) for arg in expr.args): return True [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -335,23 +335,23 @@ def _(expr, assumptions): return ask(Q.fullrank(base), assumptions) return None [email protected](Identity) # type: ignore [email protected](Identity) def _(expr, assumptions): return True [email protected](ZeroMatrix) # type: ignore [email protected](ZeroMatrix) def _(expr, assumptions): return False [email protected](OneMatrix) # type: ignore [email protected](OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 [email protected]_many(Inverse, Transpose) # type: ignore [email 
protected]_many(Inverse, Transpose) def _(expr, assumptions): return ask(Q.fullrank(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if ask(Q.orthogonal(expr.parent), assumptions): return True @@ -359,7 +359,7 @@ def _(expr, assumptions): # PositiveDefinitePredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, mmul = expr.as_coeff_mmul() if (all(ask(Q.positive_definite(arg), assumptions) @@ -371,42 +371,42 @@ def _(expr, assumptions): return ask(Q.positive_definite( MatMul(*mmul.args[1:-1])), assumptions) [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # a power of a positive definite matrix is positive definite if ask(Q.positive_definite(expr.args[0]), assumptions): return True [email protected](MatAdd) # type: ignore [email protected](MatAdd) def _(expr, assumptions): if all(ask(Q.positive_definite(arg), assumptions) for arg in expr.args): return True [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if not expr.is_square: return False if Q.positive_definite(expr) in conjuncts(assumptions): return True [email protected](Identity) # type: ignore [email protected](Identity) def _(expr, assumptions): return True [email protected](ZeroMatrix) # type: ignore [email protected](ZeroMatrix) def _(expr, assumptions): return False [email protected](OneMatrix) # type: ignore [email protected](OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 [email protected]_many(Inverse, Transpose) # type: ignore [email protected]_many(Inverse, Transpose) def _(expr, assumptions): return ask(Q.positive_definite(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if not expr.on_diag: return None @@ -416,18 +416,18 @@ def _(expr, assumptions): # UpperTriangularPredicate [email protected](MatMul) # type: ignore [email protected](MatMul) def _(expr, assumptions): factor, matrices = expr.as_coeff_matrices() if all(ask(Q.upper_triangular(m), assumptions) for m in matrices): return True [email protected](MatAdd) # type: ignore [email protected](MatAdd) def _(expr, assumptions): if all(ask(Q.upper_triangular(arg), assumptions) for arg in expr.args): return True [email protected](MatPow) # type: ignore [email protected](MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -440,52 +440,52 @@ def _(expr, assumptions): return ask(Q.upper_triangular(base), assumptions) return None [email protected](MatrixSymbol) # type: ignore [email protected](MatrixSymbol) def _(expr, assumptions): if Q.upper_triangular(expr) in conjuncts(assumptions): return True [email protected]_many(Identity, ZeroMatrix) # type: ignore [email protected]_many(Identity, ZeroMatrix) def _(expr, assumptions): return True [email protected](OneMatrix) # type: ignore [email protected](OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 [email protected](Transpose) # type: ignore [email protected](Transpose) def _(expr, assumptions): return ask(Q.lower_triangular(expr.arg), assumptions) [email protected](Inverse) # type: ignore [email protected](Inverse) def _(expr, assumptions): return ask(Q.upper_triangular(expr.arg), assumptions) [email protected](MatrixSlice) # type: ignore [email protected](MatrixSlice) def _(expr, assumptions): if not expr.on_diag: 
return None else: return ask(Q.upper_triangular(expr.parent), assumptions) -@Q.upper_triangular.register(Factorization) # type: ignore +@Q.upper_triangular.register(Factorization) def _(expr, assumptions): return _Factorization(Q.upper_triangular, expr, assumptions) # LowerTriangularPredicate -@Q.lower_triangular.register(MatMul) # type: ignore +@Q.lower_triangular.register(MatMul) def _(expr, assumptions): factor, matrices = expr.as_coeff_matrices() if all(ask(Q.lower_triangular(m), assumptions) for m in matrices): return True -@Q.lower_triangular.register(MatAdd) # type: ignore +@Q.lower_triangular.register(MatAdd) def _(expr, assumptions): if all(ask(Q.lower_triangular(arg), assumptions) for arg in expr.args): return True -@Q.lower_triangular.register(MatPow) # type: ignore +@Q.lower_triangular.register(MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -498,35 +498,35 @@ def _(expr, assumptions): return ask(Q.lower_triangular(base), assumptions) return None -@Q.lower_triangular.register(MatrixSymbol) # type: ignore +@Q.lower_triangular.register(MatrixSymbol) def _(expr, assumptions): if Q.lower_triangular(expr) in conjuncts(assumptions): return True -@Q.lower_triangular.register_many(Identity, ZeroMatrix) # type: ignore +@Q.lower_triangular.register_many(Identity, ZeroMatrix) def _(expr, assumptions): return True -@Q.lower_triangular.register(OneMatrix) # type: ignore +@Q.lower_triangular.register(OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 -@Q.lower_triangular.register(Transpose) # type: ignore +@Q.lower_triangular.register(Transpose) def _(expr, assumptions): return ask(Q.upper_triangular(expr.arg), assumptions) -@Q.lower_triangular.register(Inverse) # type: ignore +@Q.lower_triangular.register(Inverse) def _(expr, assumptions): return ask(Q.lower_triangular(expr.arg), assumptions) -@Q.lower_triangular.register(MatrixSlice) # type: ignore +@Q.lower_triangular.register(MatrixSlice) def _(expr, assumptions): if not expr.on_diag: return None else: return ask(Q.lower_triangular(expr.parent), assumptions) -@Q.lower_triangular.register(Factorization) # type: ignore +@Q.lower_triangular.register(Factorization) def _(expr, assumptions): return _Factorization(Q.lower_triangular, expr, assumptions) @@ -536,7 +536,7 @@ def _(expr, assumptions): def _is_empty_or_1x1(expr): return expr.shape in ((0, 0), (1, 1)) -@Q.diagonal.register(MatMul) # type: ignore +@Q.diagonal.register(MatMul) def _(expr, assumptions): if _is_empty_or_1x1(expr): return True @@ -544,7 +544,7 @@ def _(expr, assumptions): if all(ask(Q.diagonal(m), assumptions) for m in matrices): return True -@Q.diagonal.register(MatPow) # type: ignore +@Q.diagonal.register(MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -557,27 +557,27 @@ def _(expr, assumptions): return ask(Q.diagonal(base), assumptions) return None -@Q.diagonal.register(MatAdd) # type: ignore +@Q.diagonal.register(MatAdd) def _(expr, assumptions): if all(ask(Q.diagonal(arg), assumptions) for arg in expr.args): return True -@Q.diagonal.register(MatrixSymbol) # type: ignore +@Q.diagonal.register(MatrixSymbol) def _(expr, assumptions): if _is_empty_or_1x1(expr): return True if Q.diagonal(expr) in conjuncts(assumptions): return True -@Q.diagonal.register(OneMatrix) # type: ignore +@Q.diagonal.register(OneMatrix) def _(expr, assumptions): return expr.shape[0] == 1 and expr.shape[1] == 1 -@Q.diagonal.register_many(Inverse, Transpose) # type: ignore +@Q.diagonal.register_many(Inverse, Transpose) def _(expr, assumptions): return ask(Q.diagonal(expr.arg), assumptions) -@Q.diagonal.register(MatrixSlice) # type: ignore +@Q.diagonal.register(MatrixSlice) def _(expr, assumptions): if _is_empty_or_1x1(expr): return True @@ -586,11 +586,11 @@ def _(expr, assumptions): else: return
ask(Q.diagonal(expr.parent), assumptions) -@Q.diagonal.register_many(DiagonalMatrix, DiagMatrix, Identity, ZeroMatrix) # type: ignore +@Q.diagonal.register_many(DiagonalMatrix, DiagMatrix, Identity, ZeroMatrix) def _(expr, assumptions): return True -@Q.diagonal.register(Factorization) # type: ignore +@Q.diagonal.register(Factorization) def _(expr, assumptions): return _Factorization(Q.diagonal, expr, assumptions) @@ -613,12 +613,12 @@ def MatMul_elements(matrix_predicate, scalar_predicate, expr, assumptions): test_closed_group(Basic(*matrices), assumptions, matrix_predicate)]) -@Q.integer_elements.register_many(Determinant, HadamardProduct, MatAdd, # type: ignore +@Q.integer_elements.register_many(Determinant, HadamardProduct, MatAdd, Trace, Transpose) def _(expr, assumptions): return test_closed_group(expr, assumptions, Q.integer_elements) -@Q.integer_elements.register(MatPow) # type: ignore +@Q.integer_elements.register(MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -629,31 +629,31 @@ def _(expr, assumptions): return ask(Q.integer_elements(base), assumptions) return None -@Q.integer_elements.register_many(Identity, OneMatrix, ZeroMatrix) # type: ignore +@Q.integer_elements.register_many(Identity, OneMatrix, ZeroMatrix) def _(expr, assumptions): return True -@Q.integer_elements.register(MatMul) # type: ignore +@Q.integer_elements.register(MatMul) def _(expr, assumptions): return MatMul_elements(Q.integer_elements, Q.integer, expr, assumptions) -@Q.integer_elements.register(MatrixSlice) # type: ignore +@Q.integer_elements.register(MatrixSlice) def _(expr, assumptions): return MS_elements(Q.integer_elements, expr, assumptions) -@Q.integer_elements.register(BlockMatrix) # type: ignore +@Q.integer_elements.register(BlockMatrix) def _(expr, assumptions): return BM_elements(Q.integer_elements, expr, assumptions) # RealElementsPredicate -@Q.real_elements.register_many(Determinant, Factorization, HadamardProduct, # type: ignore +@Q.real_elements.register_many(Determinant, Factorization, HadamardProduct, MatAdd, Trace, Transpose) def _(expr, assumptions): return test_closed_group(expr, assumptions, Q.real_elements) -@Q.real_elements.register(MatPow) # type: ignore +@Q.real_elements.register(MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -666,27 +666,27 @@ def _(expr, assumptions): return ask(Q.real_elements(base), assumptions) return None -@Q.real_elements.register(MatMul) # type: ignore +@Q.real_elements.register(MatMul) def _(expr, assumptions): return MatMul_elements(Q.real_elements, Q.real, expr, assumptions) -@Q.real_elements.register(MatrixSlice) # type: ignore +@Q.real_elements.register(MatrixSlice) def _(expr, assumptions): return MS_elements(Q.real_elements, expr, assumptions) -@Q.real_elements.register(BlockMatrix) # type: ignore +@Q.real_elements.register(BlockMatrix) def _(expr, assumptions): return BM_elements(Q.real_elements, expr, assumptions) # ComplexElementsPredicate -@Q.complex_elements.register_many(Determinant, Factorization, HadamardProduct, # type: ignore +@Q.complex_elements.register_many(Determinant, Factorization, HadamardProduct, Inverse, MatAdd, Trace, Transpose) def _(expr, assumptions): return test_closed_group(expr, assumptions, Q.complex_elements) -@Q.complex_elements.register(MatPow) # type: ignore +@Q.complex_elements.register(MatPow) def _(expr, assumptions): # only for integer powers base, exp = expr.args @@ -699,18 +699,18 @@ def _(expr, assumptions): return ask(Q.complex_elements(base), assumptions) return None -@Q.complex_elements.register(MatMul) # type: ignore +@Q.complex_elements.register(MatMul) def _(expr, assumptions): return MatMul_elements(Q.complex_elements, Q.complex, expr, assumptions) -@Q.complex_elements.register(MatrixSlice) # type: ignore +@Q.complex_elements.register(MatrixSlice) def _(expr, assumptions):
return MS_elements(Q.complex_elements, expr, assumptions) -@Q.complex_elements.register(BlockMatrix) # type: ignore +@Q.complex_elements.register(BlockMatrix) def _(expr, assumptions): return BM_elements(Q.complex_elements, expr, assumptions) -@Q.complex_elements.register(DFT) # type: ignore +@Q.complex_elements.register(DFT) def _(expr, assumptions): return True diff --git a/sympy/assumptions/handlers/ntheory.py b/sympy/assumptions/handlers/ntheory.py index 48b5b09b45f8..4f1397b283ee 100644 --- a/sympy/assumptions/handlers/ntheory.py +++ b/sympy/assumptions/handlers/ntheory.py @@ -31,19 +31,19 @@ def _PrimePredicate_number(expr, assumptions): # when not exact, we won't give a True or False # since the number represents an approximate value -@Q.prime.register(Expr) # type: ignore +@Q.prime.register(Expr) def _(expr, assumptions): ret = expr.is_prime if ret is None: raise MDNotImplementedError return ret -@Q.prime.register(Basic) # type: ignore +@Q.prime.register(Basic) def _(expr, assumptions): if expr.is_number: return _PrimePredicate_number(expr, assumptions) -@Q.prime.register(Mul) # type: ignore +@Q.prime.register(Mul) def _(expr, assumptions): if expr.is_number: return _PrimePredicate_number(expr, assumptions) @@ -54,7 +54,7 @@ def _(expr, assumptions): if arg.is_number and arg.is_composite: return False -@Q.prime.register(Pow) # type: ignore +@Q.prime.register(Pow) def _(expr, assumptions): """ Integer**Integer -> !Prime @@ -65,37 +65,37 @@ def _(expr, assumptions): ask(Q.integer(expr.base), assumptions): return False -@Q.prime.register(Integer) # type: ignore +@Q.prime.register(Integer) def _(expr, assumptions): return isprime(expr) -@Q.prime.register_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) # type: ignore +@Q.prime.register_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) def _(expr, assumptions): return False -@Q.prime.register(Float) # type: ignore +@Q.prime.register(Float) def _(expr, assumptions): return _PrimePredicate_number(expr, assumptions) -@Q.prime.register(NumberSymbol) # type: ignore +@Q.prime.register(NumberSymbol) def _(expr, assumptions): return _PrimePredicate_number(expr, assumptions) -@Q.prime.register(NaN) # type: ignore +@Q.prime.register(NaN) def _(expr, assumptions): return None # CompositePredicate -@Q.composite.register(Expr) # type: ignore +@Q.composite.register(Expr) def _(expr, assumptions): ret = expr.is_composite if ret is None: raise MDNotImplementedError return ret -@Q.composite.register(Basic) # type: ignore +@Q.composite.register(Basic) def _(expr, assumptions): _positive = ask(Q.positive(expr), assumptions) if _positive: @@ -129,19 +129,19 @@ def _EvenPredicate_number(expr, assumptions): return False return i % 2 == 0 -@Q.even.register(Expr) # type: ignore +@Q.even.register(Expr) def _(expr, assumptions): ret = expr.is_even if ret is None: raise MDNotImplementedError return ret -@Q.even.register(Basic) # type: ignore +@Q.even.register(Basic) def _(expr, assumptions): if expr.is_number: return _EvenPredicate_number(expr, assumptions) -@Q.even.register(Mul) # type: ignore +@Q.even.register(Mul) def _(expr, assumptions): """ Even * Integer -> Even @@ -182,7 +182,7 @@ def _(expr, assumptions): if odd == len(expr.args): return False -@Q.even.register(Add) # type: ignore +@Q.even.register(Add) def _(expr, assumptions): """ Even + Odd -> Odd @@ -203,7 +203,7 @@ def _(expr, assumptions): else: return _result -@Q.even.register(Pow) # type: ignore +@Q.even.register(Pow) def _(expr, assumptions): if expr.is_number: return _EvenPredicate_number(expr, assumptions) @@ -215,48 +215,48 @@ def _(expr, assumptions): elif
expr.base is S.NegativeOne: return False -@Q.even.register(Integer) # type: ignore +@Q.even.register(Integer) def _(expr, assumptions): return not bool(expr.p & 1) -@Q.even.register_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) # type: ignore +@Q.even.register_many(Rational, Infinity, NegativeInfinity, ImaginaryUnit) def _(expr, assumptions): return False -@Q.even.register(NumberSymbol) # type: ignore +@Q.even.register(NumberSymbol) def _(expr, assumptions): return _EvenPredicate_number(expr, assumptions) -@Q.even.register(Abs) # type: ignore +@Q.even.register(Abs) def _(expr, assumptions): if ask(Q.real(expr.args[0]), assumptions): return ask(Q.even(expr.args[0]), assumptions) -@Q.even.register(re) # type: ignore +@Q.even.register(re) def _(expr, assumptions): if ask(Q.real(expr.args[0]), assumptions): return ask(Q.even(expr.args[0]), assumptions) -@Q.even.register(im) # type: ignore +@Q.even.register(im) def _(expr, assumptions): if ask(Q.real(expr.args[0]), assumptions): return True -@Q.even.register(NaN) # type: ignore +@Q.even.register(NaN) def _(expr, assumptions): return None # OddPredicate -@Q.odd.register(Expr) # type: ignore +@Q.odd.register(Expr) def _(expr, assumptions): ret = expr.is_odd if ret is None: raise MDNotImplementedError return ret -@Q.odd.register(Basic) # type: ignore +@Q.odd.register(Basic) def _(expr, assumptions): _integer = ask(Q.integer(expr), assumptions) if _integer: diff --git a/sympy/assumptions/handlers/order.py b/sympy/assumptions/handlers/order.py index 30d025118d2e..f4a5378c20a9 100644 --- a/sympy/assumptions/handlers/order.py +++ b/sympy/assumptions/handlers/order.py @@ -40,19 +40,19 @@ def _NegativePredicate_number(expr, assumptions): if r._prec != 1: return r < 0 -@Q.negative.register(Basic) # type: ignore +@Q.negative.register(Basic) def _(expr, assumptions): if expr.is_number: return _NegativePredicate_number(expr, assumptions) -@Q.negative.register(Expr) # type: ignore +@Q.negative.register(Expr) def _(expr, assumptions): ret = expr.is_negative if ret is None: raise MDNotImplementedError return ret -@Q.negative.register(Add) # type: ignore +@Q.negative.register(Add) def _(expr, assumptions): """ Positive + Positive -> Positive, @@ -76,7 +76,7 @@ def _(expr, assumptions): if nonpos < len(expr.args): return True -@Q.negative.register(Mul) # type: ignore +@Q.negative.register(Mul) def _(expr, assumptions): if expr.is_number: return _NegativePredicate_number(expr, assumptions) @@ -92,7 +92,7 @@ def _(expr, assumptions): return return result -@Q.negative.register(Pow) # type: ignore +@Q.negative.register(Pow) def _(expr, assumptions): """ Real ** Even -> NonNegative @@ -116,11 +116,11 @@ def _(expr, assumptions): if ask(Q.odd(expr.exp), assumptions): return ask(Q.negative(expr.base), assumptions) -@Q.negative.register_many(Abs, ImaginaryUnit) # type: ignore +@Q.negative.register_many(Abs, ImaginaryUnit) def _(expr, assumptions): return False -@Q.negative.register(exp) # type: ignore +@Q.negative.register(exp) def _(expr, assumptions): if ask(Q.real(expr.exp), assumptions): return False @@ -129,7 +129,7 @@ def _(expr, assumptions): # NonNegativePredicate -@Q.nonnegative.register(Basic) # type: ignore +@Q.nonnegative.register(Basic) def _(expr, assumptions): if expr.is_number: notnegative = fuzzy_not(_NegativePredicate_number(expr, assumptions)) @@ -138,7 +138,7 @@ def _(expr, assumptions): else: return notnegative -@Q.nonnegative.register(Expr) # type: ignore +@Q.nonnegative.register(Expr) def _(expr, assumptions): ret = expr.is_nonnegative if ret is None: @@ -148,14 +148,14 @@ def _(expr, assumptions): #
NonZeroPredicate -@Q.nonzero.register(Expr) # type: ignore +@Q.nonzero.register(Expr) def _(expr, assumptions): ret = expr.is_nonzero if ret is None: raise MDNotImplementedError return ret -@Q.nonzero.register(Basic) # type: ignore +@Q.nonzero.register(Basic) def _(expr, assumptions): if ask(Q.real(expr)) is False: return False @@ -167,13 +167,13 @@ def nonz(i): return i != 0 return fuzzy_or(nonz(i) for i in i.as_real_imag()) -@Q.nonzero.register(Add) # type: ignore +@Q.nonzero.register(Add) def _(expr, assumptions): if all(ask(Q.positive(x), assumptions) for x in expr.args) \ or all(ask(Q.negative(x), assumptions) for x in expr.args): return True -@Q.nonzero.register(Mul) # type: ignore +@Q.nonzero.register(Mul) def _(expr, assumptions): for arg in expr.args: result = ask(Q.nonzero(arg), assumptions) @@ -182,34 +182,34 @@ def _(expr, assumptions): return result return True -@Q.nonzero.register(Pow) # type: ignore +@Q.nonzero.register(Pow) def _(expr, assumptions): return ask(Q.nonzero(expr.base), assumptions) -@Q.nonzero.register(Abs) # type: ignore +@Q.nonzero.register(Abs) def _(expr, assumptions): return ask(Q.nonzero(expr.args[0]), assumptions) -@Q.nonzero.register(NaN) # type: ignore +@Q.nonzero.register(NaN) def _(expr, assumptions): return None # ZeroPredicate -@Q.zero.register(Expr) # type: ignore +@Q.zero.register(Expr) def _(expr, assumptions): ret = expr.is_zero if ret is None: raise MDNotImplementedError return ret -@Q.zero.register(Basic) # type: ignore +@Q.zero.register(Basic) def _(expr, assumptions): return fuzzy_and([fuzzy_not(ask(Q.nonzero(expr), assumptions)), ask(Q.real(expr), assumptions)]) -@Q.zero.register(Mul) # type: ignore +@Q.zero.register(Mul) def _(expr, assumptions): # TODO: This should be deducible from the nonzero handler return fuzzy_or(ask(Q.zero(arg), assumptions) for arg in expr.args) @@ -217,14 +217,14 @@ def _(expr, assumptions): # NonPositivePredicate -@Q.nonpositive.register(Expr) # type: ignore +@Q.nonpositive.register(Expr) def _(expr, assumptions): ret = expr.is_nonpositive if ret is None: raise MDNotImplementedError return ret -@Q.nonpositive.register(Basic) # type: ignore +@Q.nonpositive.register(Basic) def _(expr, assumptions): if expr.is_number: notpositive = fuzzy_not(_PositivePredicate_number(expr, assumptions)) @@ -255,19 +255,19 @@ def _PositivePredicate_number(expr, assumptions): if r._prec != 1: return r > 0 -@Q.positive.register(Expr) # type: ignore +@Q.positive.register(Expr) def _(expr, assumptions): ret = expr.is_positive if ret is None: raise MDNotImplementedError return ret -@Q.positive.register(Basic) # type: ignore +@Q.positive.register(Basic) def _(expr, assumptions): if expr.is_number: return _PositivePredicate_number(expr, assumptions) -@Q.positive.register(Mul) # type: ignore +@Q.positive.register(Mul) def _(expr, assumptions): if expr.is_number: return _PositivePredicate_number(expr, assumptions) @@ -281,7 +281,7 @@ def _(expr, assumptions): return return result -@Q.positive.register(Add) # type: ignore +@Q.positive.register(Add) def _(expr, assumptions): if expr.is_number: return _PositivePredicate_number(expr, assumptions) @@ -301,7 +301,7 @@ def _(expr, assumptions): if nonneg < len(expr.args): return True -@Q.positive.register(Pow) # type: ignore +@Q.positive.register(Pow) def _(expr, assumptions): if expr.base == E: if ask(Q.real(expr.exp), assumptions): @@ -321,14 +321,14 @@ def _(expr, assumptions): if ask(Q.odd(expr.exp), assumptions): return False -@Q.positive.register(exp) # type: ignore +@Q.positive.register(exp) def _(expr, assumptions): if ask(Q.real(expr.exp), assumptions): return True if ask(Q.imaginary(expr.exp),
assumptions): return ask(Q.even(expr.exp/(I*pi)), assumptions) -@Q.positive.register(log) # type: ignore +@Q.positive.register(log) def _(expr, assumptions): r = ask(Q.real(expr.args[0]), assumptions) if r is not True: @@ -338,41 +338,41 @@ def _(expr, assumptions): if ask(Q.negative(expr.args[0] - 1), assumptions): return False -@Q.positive.register(factorial) # type: ignore +@Q.positive.register(factorial) def _(expr, assumptions): x = expr.args[0] if ask(Q.integer(x) & Q.positive(x), assumptions): return True -@Q.positive.register(ImaginaryUnit) # type: ignore +@Q.positive.register(ImaginaryUnit) def _(expr, assumptions): return False -@Q.positive.register(Abs) # type: ignore +@Q.positive.register(Abs) def _(expr, assumptions): return ask(Q.nonzero(expr), assumptions) -@Q.positive.register(Trace) # type: ignore +@Q.positive.register(Trace) def _(expr, assumptions): if ask(Q.positive_definite(expr.arg), assumptions): return True -@Q.positive.register(Determinant) # type: ignore +@Q.positive.register(Determinant) def _(expr, assumptions): if ask(Q.positive_definite(expr.arg), assumptions): return True -@Q.positive.register(MatrixElement) # type: ignore +@Q.positive.register(MatrixElement) def _(expr, assumptions): if (expr.i == expr.j and ask(Q.positive_definite(expr.parent), assumptions)): return True -@Q.positive.register(atan) # type: ignore +@Q.positive.register(atan) def _(expr, assumptions): return ask(Q.positive(expr.args[0]), assumptions) -@Q.positive.register(asin) # type: ignore +@Q.positive.register(asin) def _(expr, assumptions): x = expr.args[0] if ask(Q.positive(x) & Q.nonpositive(x - 1), assumptions): @@ -380,38 +380,38 @@ def _(expr, assumptions): if ask(Q.negative(x) & Q.nonnegative(x + 1), assumptions): return False -@Q.positive.register(acos) # type: ignore +@Q.positive.register(acos) def _(expr, assumptions): x = expr.args[0] if ask(Q.nonpositive(x - 1) & Q.nonnegative(x + 1), assumptions): return True -@Q.positive.register(acot) # type: ignore +@Q.positive.register(acot) def _(expr, assumptions): return ask(Q.real(expr.args[0]), assumptions) -@Q.positive.register(NaN) # type: ignore +@Q.positive.register(NaN) def _(expr, assumptions): return None # ExtendedNegativePredicate -@Q.extended_negative.register(object) # type: ignore +@Q.extended_negative.register(object) def _(expr, assumptions): return ask(Q.negative(expr) | Q.negative_infinite(expr), assumptions) # ExtendedPositivePredicate -@Q.extended_positive.register(object) # type: ignore +@Q.extended_positive.register(object) def _(expr, assumptions): return ask(Q.positive(expr) | Q.positive_infinite(expr), assumptions) # ExtendedNonZeroPredicate -@Q.extended_nonzero.register(object) # type: ignore +@Q.extended_nonzero.register(object) def _(expr, assumptions): return ask( Q.negative_infinite(expr) | Q.negative(expr) | Q.positive(expr) | Q.positive_infinite(expr), @@ -420,7 +420,7 @@ def _(expr, assumptions): # ExtendedNonPositivePredicate -@Q.extended_nonpositive.register(object) # type: ignore +@Q.extended_nonpositive.register(object) def _(expr, assumptions): return ask( Q.negative_infinite(expr) | Q.negative(expr) | Q.zero(expr), @@ -429,7 +429,7 @@ def _(expr, assumptions): # ExtendedNonNegativePredicate -@Q.extended_nonnegative.register(object) # type: ignore +@Q.extended_nonnegative.register(object) def _(expr, assumptions): return ask( Q.zero(expr) | Q.positive(expr) | Q.positive_infinite(expr), diff --git a/sympy/assumptions/handlers/sets.py b/sympy/assumptions/handlers/sets.py index 377ce28eae07..b53bcfedef30 100644 --- a/sympy/assumptions/handlers/sets.py +++ b/sympy/assumptions/handlers/sets.py @@ -41,19 +41,19 @@ def _IntegerPredicate_number(expr, assumptions): def _(expr, assumptions): return True -@Q.integer.register_many(Exp1,
GoldenRatio, ImaginaryUnit, Infinity, # type: ignore +@Q.integer.register_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity, NegativeInfinity, Pi, Rational, TribonacciConstant) def _(expr, assumptions): return False -@Q.integer.register(Expr) # type: ignore +@Q.integer.register(Expr) def _(expr, assumptions): ret = expr.is_integer if ret is None: raise MDNotImplementedError return ret -@Q.integer.register_many(Add, Pow) # type: ignore +@Q.integer.register_many(Add, Pow) def _(expr, assumptions): """ * Integer + Integer -> Integer @@ -64,7 +64,7 @@ def _(expr, assumptions): return _IntegerPredicate_number(expr, assumptions) return test_closed_group(expr, assumptions, Q.integer) -@Q.integer.register(Mul) # type: ignore +@Q.integer.register(Mul) def _(expr, assumptions): """ * Integer*Integer -> Integer @@ -92,38 +92,38 @@ def _(expr, assumptions): return _output -@Q.integer.register(Abs) # type: ignore +@Q.integer.register(Abs) def _(expr, assumptions): return ask(Q.integer(expr.args[0]), assumptions) -@Q.integer.register_many(Determinant, MatrixElement, Trace) # type: ignore +@Q.integer.register_many(Determinant, MatrixElement, Trace) def _(expr, assumptions): return ask(Q.integer_elements(expr.args[0]), assumptions) # RationalPredicate -@Q.rational.register(Rational) # type: ignore +@Q.rational.register(Rational) def _(expr, assumptions): return True -@Q.rational.register(Float) # type: ignore +@Q.rational.register(Float) def _(expr, assumptions): return None -@Q.rational.register_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity, # type: ignore +@Q.rational.register_many(Exp1, GoldenRatio, ImaginaryUnit, Infinity, NegativeInfinity, Pi, TribonacciConstant) def _(expr, assumptions): return False -@Q.rational.register(Expr) # type: ignore +@Q.rational.register(Expr) def _(expr, assumptions): ret = expr.is_rational if ret is None: raise MDNotImplementedError return ret -@Q.rational.register_many(Add, Mul) # type: ignore +@Q.rational.register_many(Add, Mul) def _(expr, assumptions): """ * Rational + Rational -> Rational @@ -135,7 +135,7 @@ def _(expr, assumptions): return False return test_closed_group(expr, assumptions, Q.rational) -@Q.rational.register(Pow) # type: ignore +@Q.rational.register(Pow) def _(expr, assumptions): """ * Rational ** Integer -> Rational @@ -154,25 +154,25 @@ def _(expr, assumptions): if ask(Q.prime(expr.base), assumptions): return False -@Q.rational.register_many(asin, atan, cos, sin, tan) # type: ignore +@Q.rational.register_many(asin, atan, cos, sin, tan) def _(expr, assumptions): x = expr.args[0] if ask(Q.rational(x), assumptions): return ask(~Q.nonzero(x), assumptions) -@Q.rational.register(exp) # type: ignore +@Q.rational.register(exp) def _(expr, assumptions): x = expr.exp if ask(Q.rational(x), assumptions): return ask(~Q.nonzero(x), assumptions) -@Q.rational.register_many(acot, cot) # type: ignore +@Q.rational.register_many(acot, cot) def _(expr, assumptions): x = expr.args[0] if ask(Q.rational(x), assumptions): return False -@Q.rational.register_many(acos, log) # type: ignore +@Q.rational.register_many(acos, log) def _(expr, assumptions): x = expr.args[0] if ask(Q.rational(x), assumptions): @@ -181,14 +181,14 @@ def _(expr, assumptions): # IrrationalPredicate -@Q.irrational.register(Expr) # type: ignore +@Q.irrational.register(Expr) def _(expr, assumptions): ret = expr.is_irrational if ret is None: raise MDNotImplementedError return ret -@Q.irrational.register(Basic) # type: ignore +@Q.irrational.register(Basic) def _(expr, assumptions): _real = ask(Q.real(expr), assumptions) if _real: @@ -211,23 +211,23 @@ def _RealPredicate_number(expr, assumptions): # allow None to be returned if we couldn't
show for sure # that i was 0 -@Q.real.register_many(Abs, Exp1, Float, GoldenRatio, im, Pi, Rational, # type: ignore +@Q.real.register_many(Abs, Exp1, Float, GoldenRatio, im, Pi, Rational, re, TribonacciConstant) def _(expr, assumptions): return True -@Q.real.register_many(ImaginaryUnit, Infinity, NegativeInfinity) # type: ignore +@Q.real.register_many(ImaginaryUnit, Infinity, NegativeInfinity) def _(expr, assumptions): return False -@Q.real.register(Expr) # type: ignore +@Q.real.register(Expr) def _(expr, assumptions): ret = expr.is_real if ret is None: raise MDNotImplementedError return ret -@Q.real.register(Add) # type: ignore +@Q.real.register(Add) def _(expr, assumptions): """ * Real + Real -> Real @@ -237,7 +237,7 @@ def _(expr, assumptions): return _RealPredicate_number(expr, assumptions) return test_closed_group(expr, assumptions, Q.real) -@Q.real.register(Mul) # type: ignore +@Q.real.register(Mul) def _(expr, assumptions): """ * Real*Real -> Real @@ -257,7 +257,7 @@ def _(expr, assumptions): else: return result -@Q.real.register(Pow) # type: ignore +@Q.real.register(Pow) def _(expr, assumptions): """ * Real**Integer -> Real @@ -321,29 +321,29 @@ def _(expr, assumptions): elif ask(Q.negative(expr.base), assumptions): return False -@Q.real.register_many(cos, sin) # type: ignore +@Q.real.register_many(cos, sin) def _(expr, assumptions): if ask(Q.real(expr.args[0]), assumptions): return True -@Q.real.register(exp) # type: ignore +@Q.real.register(exp) def _(expr, assumptions): return ask( Q.integer(expr.exp/I/pi) | Q.real(expr.exp), assumptions ) -@Q.real.register(log) # type: ignore +@Q.real.register(log) def _(expr, assumptions): return ask(Q.positive(expr.args[0]), assumptions) -@Q.real.register_many(Determinant, MatrixElement, Trace) # type: ignore +@Q.real.register_many(Determinant, MatrixElement, Trace) def _(expr, assumptions): return ask(Q.real_elements(expr.args[0]), assumptions) # ExtendedRealPredicate -@Q.extended_real.register(object) # type: ignore +@Q.extended_real.register(object) def _(expr, assumptions): return ask(Q.negative_infinite(expr) | Q.negative(expr) @@ -352,7 +352,7 @@ def _(expr, assumptions): | Q.positive_infinite(expr), assumptions) -@Q.extended_real.register_many(Infinity, NegativeInfinity) # type: ignore +@Q.extended_real.register_many(Infinity, NegativeInfinity) def _(expr, assumptions): return True diff --git a/sympy/assumptions/sathandlers.py b/sympy/assumptions/sathandlers.py index b96154b60db2..48579a87274e 100644 --- a/sympy/assumptions/sathandlers.py +++ b/sympy/assumptions/sathandlers.py @@ -200,7 +200,7 @@ def __call__(self, expr): ## Abs ## -@class_fact_registry.multiregister(Abs) # type: ignore +@class_fact_registry.multiregister(Abs) def _(expr): arg = expr.args[0] return [Q.nonnegative(expr), @@ -213,7 +213,7 @@ def _(expr): ### Add ## -@class_fact_registry.multiregister(Add) # type: ignore +@class_fact_registry.multiregister(Add) def _(expr): return [allargs(x, Q.positive(x), expr) >> Q.positive(expr), allargs(x, Q.negative(x), expr) >> Q.negative(expr), @@ -223,7 +223,7 @@ def _(expr): exactlyonearg(x, ~Q.integer(x), expr) >> ~Q.integer(expr), ] -@class_fact_registry.register(Add) # type: ignore +@class_fact_registry.register(Add) def _(expr): allargs_real = allargs(x, Q.real(x), expr) onearg_irrational = exactlyonearg(x, Q.irrational(x), expr) @@ -232,7 +232,7 @@ def _(expr): ### Mul ### -@class_fact_registry.multiregister(Mul) # type: ignore +@class_fact_registry.multiregister(Mul) def _(expr): return [Equivalent(Q.zero(expr), anyarg(x, Q.zero(x), expr)), allargs(x,
Q.positive(x), expr) >> Q.positive(expr), @@ -243,7 +243,7 @@ def _(expr): allargs(x, Q.commutative(x), expr) >> Q.commutative(expr), ] -@class_fact_registry.register(Mul) # type: ignore +@class_fact_registry.register(Mul) def _(expr): # Implicitly assumes Mul has more than one arg # Would be allargs(x, Q.prime(x) | Q.composite(x)) except 1 is composite @@ -252,20 +252,20 @@ def _(expr): allargs_prime = allargs(x, Q.prime(x), expr) return Implies(allargs_prime, ~Q.prime(expr)) -@class_fact_registry.register(Mul) # type: ignore +@class_fact_registry.register(Mul) def _(expr): # General Case: Odd number of imaginary args implies mul is imaginary(To be implemented) allargs_imag_or_real = allargs(x, Q.imaginary(x) | Q.real(x), expr) onearg_imaginary = exactlyonearg(x, Q.imaginary(x), expr) return Implies(allargs_imag_or_real, Implies(onearg_imaginary, Q.imaginary(expr))) -@class_fact_registry.register(Mul) # type: ignore +@class_fact_registry.register(Mul) def _(expr): allargs_real = allargs(x, Q.real(x), expr) onearg_irrational = exactlyonearg(x, Q.irrational(x), expr) return Implies(allargs_real, Implies(onearg_irrational, Q.irrational(expr))) -@class_fact_registry.register(Mul) # type: ignore +@class_fact_registry.register(Mul) def _(expr): # Including the integer qualification means we don't need to add any facts # for odd, since the assumptions already know that every integer is @@ -277,7 +277,7 @@ def _(expr): ### MatMul ### -@class_fact_registry.register(MatMul) # type: ignore +@class_fact_registry.register(MatMul) def _(expr): allargs_square = allargs(x, Q.square(x), expr) allargs_invertible = allargs(x, Q.invertible(x), expr) @@ -286,7 +286,7 @@ def _(expr): ### Pow ### -@class_fact_registry.multiregister(Pow) # type: ignore +@class_fact_registry.multiregister(Pow) def _(expr): base, exp = expr.base, expr.exp return [ @@ -312,7 +312,7 @@ def _(expr): Q.composite: lambda o: o.is_composite, } -@class_fact_registry.multiregister(Number, NumberSymbol, ImaginaryUnit) # type: ignore +@class_fact_registry.multiregister(Number, NumberSymbol, ImaginaryUnit) def _(expr): ret = [] for p, getter in _old_assump_getters.items(): diff --git a/sympy/sets/handlers/add.py b/sympy/sets/handlers/add.py index a1b6f9bbd8ee..8c07b25ed19d 100644 --- a/sympy/sets/handlers/add.py +++ b/sympy/sets/handlers/add.py @@ -1,8 +1,7 @@ from sympy.core.numbers import oo, Infinity, NegativeInfinity from sympy.core.singleton import S -from sympy.core.symbol import symbols from sympy.core import Basic, Expr -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher from sympy.sets import Interval, FiniteSet @@ -10,21 +9,22 @@ # XXX: The functions in this module are clearly not tested and are broken in a # number of ways. 
-_x, _y = symbols("x y") +_set_add = Dispatcher('_set_add') +_set_sub = Dispatcher('_set_sub') -@dispatch(Basic, Basic) # type: ignore # noqa:F811 -def _set_add(x, y): # noqa:F811 +@_set_add.register(Basic, Basic) +def _(x, y): return None -@dispatch(Expr, Expr) # type: ignore # noqa:F811 -def _set_add(x, y): # noqa:F811 +@_set_add.register(Expr, Expr) +def _(x, y): return x+y -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def _set_add(x, y): # noqa:F811 +@_set_add.register(Interval, Interval) +def _(x, y): """ Additions in interval arithmetic https://en.wikipedia.org/wiki/Interval_arithmetic @@ -33,31 +33,31 @@ def _set_add(x, y): # noqa:F811 x.left_open or y.left_open, x.right_open or y.right_open) -@dispatch(Interval, Infinity) # type: ignore # noqa:F811 -def _set_add(x, y): # noqa:F811 +@_set_add.register(Interval, Infinity) +def _(x, y): if x.start is S.NegativeInfinity: return Interval(-oo, oo) return FiniteSet({S.Infinity}) -@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811 -def _set_add(x, y): # noqa:F811 +@_set_add.register(Interval, NegativeInfinity) +def _(x, y): if x.end is S.Infinity: return Interval(-oo, oo) return FiniteSet({S.NegativeInfinity}) -@dispatch(Basic, Basic) # type: ignore -def _set_sub(x, y): # noqa:F811 +@_set_sub.register(Basic, Basic) +def _(x, y): return None -@dispatch(Expr, Expr) # type: ignore # noqa:F811 -def _set_sub(x, y): # noqa:F811 +@_set_sub.register(Expr, Expr) +def _(x, y): return x-y -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def _set_sub(x, y): # noqa:F811 +@_set_sub.register(Interval, Interval) +def _(x, y): """ Subtractions in interval arithmetic https://en.wikipedia.org/wiki/Interval_arithmetic @@ -66,14 +66,14 @@ def _set_sub(x, y): # noqa:F811 x.left_open or y.right_open, x.right_open or y.left_open) -@dispatch(Interval, Infinity) # type: ignore # noqa:F811 -def _set_sub(x, y): # noqa:F811 +@_set_sub.register(Interval, Infinity) +def _(x, y): if x.start is S.NegativeInfinity: return Interval(-oo, oo) return FiniteSet(-oo) -@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811 -def _set_sub(x, y): # noqa:F811 +@_set_sub.register(Interval, NegativeInfinity) +def _(x, y): if x.start is S.NegativeInfinity: return Interval(-oo, oo) return FiniteSet(-oo) diff --git a/sympy/sets/handlers/functions.py b/sympy/sets/handlers/functions.py index 88df9e2cda6d..2529dbfd4584 100644 --- a/sympy/sets/handlers/functions.py +++ b/sympy/sets/handlers/functions.py @@ -8,7 +8,7 @@ from sympy.functions.elementary.exponential import exp, log from sympy.functions.elementary.miscellaneous import Min, Max from sympy.logic.boolalg import true -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher from sympy.sets import (imageset, Interval, FiniteSet, Union, ImageSet, Intersection, Range, Complement) from sympy.sets.sets import EmptySet, is_function_invertible_in_set @@ -20,17 +20,19 @@ FunctionUnion = (FunctionClass, Lambda) +_set_function = Dispatcher('_set_function') -@dispatch(FunctionClass, Set) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 + +@_set_function.register(FunctionClass, Set) +def _(f, x): return None -@dispatch(FunctionUnion, FiniteSet) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionUnion, FiniteSet) +def _(f, x): return FiniteSet(*map(f, x)) -@dispatch(Lambda, Interval) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(Lambda, Interval) +def _(f, x): 
from sympy.solvers.solveset import solveset from sympy.series import limit # TODO: handle functions with infinitely many solutions (eg, sin, tan) @@ -120,36 +122,36 @@ def _set_function(f, x): # noqa:F811 for i in range(0, len(sing) - 1)]) + \ imageset(f, Interval(sing[-1], x.end, True, x.right_open)) -@dispatch(FunctionClass, Interval) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionClass, Interval) +def _(f, x): if f == exp: return Interval(exp(x.start), exp(x.end), x.left_open, x.right_open) elif f == log: return Interval(log(x.start), log(x.end), x.left_open, x.right_open) return ImageSet(Lambda(_x, f(_x)), x) -@dispatch(FunctionUnion, Union) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionUnion, Union) +def _(f, x): return Union(*(imageset(f, arg) for arg in x.args)) -@dispatch(FunctionUnion, Intersection) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionUnion, Intersection) +def _(f, x): # If the function is invertible, intersect the maps of the sets. if is_function_invertible_in_set(f, x): return Intersection(*(imageset(f, arg) for arg in x.args)) else: return ImageSet(Lambda(_x, f(_x)), x) -@dispatch(FunctionUnion, EmptySet) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionUnion, EmptySet) +def _(f, x): return x -@dispatch(FunctionUnion, Set) # type: ignore # noqa:F811 -def _set_function(f, x): # noqa:F811 +@_set_function.register(FunctionUnion, Set) +def _(f, x): return ImageSet(Lambda(_x, f(_x)), x) -@dispatch(FunctionUnion, Range) # type: ignore # noqa:F811 -def _set_function(f, self): # noqa:F811 +@_set_function.register(FunctionUnion, Range) +def _(f, self): if not self: return S.EmptySet if not isinstance(f.expr, Expr): @@ -172,8 +174,8 @@ def _set_function(f, self): # noqa:F811 if F != expr: return imageset(x, F, Range(self.size)) -@dispatch(FunctionUnion, Integers) # type: ignore # noqa:F811 -def _set_function(f, self): # noqa:F811 +@_set_function.register(FunctionUnion, Integers) +def _(f, self): expr = f.expr if not isinstance(expr, Expr): return @@ -225,8 +227,8 @@ def _set_function(f, self): # noqa:F811 return ImageSet(Lambda(n, expr), S.Integers) -@dispatch(FunctionUnion, Naturals) # type: ignore # noqa:F811 -def _set_function(f, self): # noqa:F811 +@_set_function.register(FunctionUnion, Naturals) +def _(f, self): expr = f.expr if not isinstance(expr, Expr): return @@ -252,8 +254,8 @@ def _set_function(f, self): # noqa:F811 return Range(c, -oo, step) -@dispatch(FunctionUnion, Reals) # type: ignore # noqa:F811 -def _set_function(f, self): # noqa:F811 +@_set_function.register(FunctionUnion, Reals) +def _(f, self): expr = f.expr if not isinstance(expr, Expr): return diff --git a/sympy/sets/handlers/intersection.py b/sympy/sets/handlers/intersection.py index 1980251c5d97..9305721b3cb7 100644 --- a/sympy/sets/handlers/intersection.py +++ b/sympy/sets/handlers/intersection.py @@ -6,7 +6,7 @@ from sympy.core.symbol import (Dummy, symbols) from sympy.sets.fancysets import ComplexRegion from sympy.sets.sets import (FiniteSet, Intersection, Interval, Set, Union) -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher from sympy.sets.conditionset import ConditionSet from sympy.sets.fancysets import (Integers, Naturals, Reals, Range, ImageSet, Rationals) @@ -14,28 +14,31 @@ from sympy.simplify.radsimp import numer -@dispatch(ConditionSet, 
ConditionSet) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +intersection_sets = Dispatcher('intersection_sets') + + +@intersection_sets.register(ConditionSet, ConditionSet) +def _(a, b): return None -@dispatch(ConditionSet, Set) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(ConditionSet, Set) +def _(a, b): return ConditionSet(a.sym, a.condition, Intersection(a.base_set, b)) -@dispatch(Naturals, Integers) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Naturals, Integers) +def _(a, b): return a -@dispatch(Naturals, Naturals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Naturals, Naturals) +def _(a, b): return a if a is S.Naturals else b -@dispatch(Interval, Naturals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Interval, Naturals) +def _(a, b): return intersection_sets(b, a) -@dispatch(ComplexRegion, Set) # type: ignore # noqa:F811 -def intersection_sets(self, other): # noqa:F811 +@intersection_sets.register(ComplexRegion, Set) +def _(self, other): if other.is_ComplexRegion: # self in rectangular form if (not self.polar) and (not other.polar): @@ -81,12 +84,12 @@ def intersection_sets(self, other): # noqa:F811 new_interval = Union(*new_interval) return Intersection(new_interval, other) -@dispatch(Integers, Reals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Integers, Reals) +def _(a, b): return a -@dispatch(Range, Interval) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Range, Interval) +def _(a, b): # Check that there are no symbolic arguments if not all(i.is_number for i in a.args + b.args[:2]): return @@ -106,12 +109,12 @@ def intersection_sets(a, b): # noqa:F811 end -= 1 return intersection_sets(a, Range(start, end + 1)) -@dispatch(Range, Naturals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Range, Naturals) +def _(a, b): return intersection_sets(a, Interval(b.inf, S.Infinity)) -@dispatch(Range, Range) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Range, Range) +def _(a, b): # Check that there are no symbolic range arguments if not all(all(v.is_number for v in r.args) for r in [a, b]): return None @@ -226,13 +229,13 @@ def _updated_range(r, first): return Range(start, stop, step) -@dispatch(Range, Integers) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Range, Integers) +def _(a, b): return a -@dispatch(ImageSet, Set) # type: ignore # noqa:F811 -def intersection_sets(self, other): # noqa:F811 +@intersection_sets.register(ImageSet, Set) +def _(self, other): from sympy.solvers.diophantine import diophantine # Only handle the straight-forward univariate case @@ -395,15 +398,15 @@ def _solution_union(exprs, sym): return -@dispatch(ProductSet, ProductSet) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(ProductSet, ProductSet) +def _(a, b): if len(b.args) != len(a.args): return S.EmptySet return ProductSet(*(i.intersect(j) for i, j in zip(a.sets, b.sets))) -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Interval, Interval) +def _(a, b): # handle (-oo, oo) infty = 
S.NegativeInfinity, S.Infinity if a == Interval(*infty): @@ -449,39 +452,39 @@ def intersection_sets(a, b): # noqa:F811 return Interval(start, end, left_open, right_open) -@dispatch(EmptySet, Set) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(EmptySet, Set) +def _(a, b): return S.EmptySet -@dispatch(UniversalSet, Set) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(UniversalSet, Set) +def _(a, b): return b -@dispatch(FiniteSet, FiniteSet) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(FiniteSet, FiniteSet) +def _(a, b): return FiniteSet(*(a._elements & b._elements)) -@dispatch(FiniteSet, Set) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(FiniteSet, Set) +def _(a, b): try: return FiniteSet(*[el for el in a if el in b]) except TypeError: return None # could not evaluate `el in b` due to symbolic ranges. -@dispatch(Set, Set) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Set, Set) +def _(a, b): return None -@dispatch(Integers, Rationals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Integers, Rationals) +def _(a, b): return a -@dispatch(Naturals, Rationals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Naturals, Rationals) +def _(a, b): return a -@dispatch(Rationals, Reals) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Rationals, Reals) +def _(a, b): return a def _intlike_interval(a, b): @@ -494,10 +497,10 @@ def _intlike_interval(a, b): except ValueError: return None -@dispatch(Integers, Interval) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Integers, Interval) +def _(a, b): return _intlike_interval(a, b) -@dispatch(Naturals, Interval) # type: ignore # noqa:F811 -def intersection_sets(a, b): # noqa:F811 +@intersection_sets.register(Naturals, Interval) +def _(a, b): return _intlike_interval(a, b) diff --git a/sympy/sets/handlers/issubset.py b/sympy/sets/handlers/issubset.py index f39c594101cf..cc23e8bf56f1 100644 --- a/sympy/sets/handlers/issubset.py +++ b/sympy/sets/handlers/issubset.py @@ -4,17 +4,21 @@ from sympy.core.relational import Eq from sympy.sets.sets import FiniteSet, Interval, Set, Union, ProductSet from sympy.sets.fancysets import Complexes, Reals, Range, Rationals -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher _inf_sets = [S.Naturals, S.Naturals0, S.Integers, S.Rationals, S.Reals, S.Complexes] -@dispatch(Set, Set) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 + +is_subset_sets = Dispatcher('is_subset_sets') + + +@is_subset_sets.register(Set, Set) +def _(a, b): return None -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Interval, Interval) +def _(a, b): # This is correct but can be made more comprehensive... 
if fuzzy_bool(a.start < b.start): return False @@ -25,15 +29,15 @@ def is_subset_sets(a, b): # noqa:F811 if (b.right_open and not a.right_open and fuzzy_bool(Eq(a.end, b.end))): return False -@dispatch(Interval, FiniteSet) # type: ignore # noqa:F811 -def is_subset_sets(a_interval, b_fs): # noqa:F811 +@is_subset_sets.register(Interval, FiniteSet) +def _(a_interval, b_fs): # An Interval can only be a subset of a finite set if it is finite # which can only happen if it has zero measure. if fuzzy_not(a_interval.measure.is_zero): return False -@dispatch(Interval, Union) # type: ignore # noqa:F811 -def is_subset_sets(a_interval, b_u): # noqa:F811 +@is_subset_sets.register(Interval, Union) +def _(a_interval, b_u): if all(isinstance(s, (Interval, FiniteSet)) for s in b_u.args): intervals = [s for s in b_u.args if isinstance(s, Interval)] if all(fuzzy_bool(a_interval.start < s.start) for s in intervals): @@ -48,14 +52,14 @@ def is_subset_sets(a_interval, b_u): # noqa:F811 if all(no_overlap(s, a_interval) for s in intervals): return False -@dispatch(Range, Range) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Range, Range) +def _(a, b): if a.step == b.step == 1: return fuzzy_and([fuzzy_bool(a.start >= b.start), fuzzy_bool(a.stop <= b.stop)]) -@dispatch(Range, Interval) # type: ignore # noqa:F811 -def is_subset_sets(a_range, b_interval): # noqa:F811 +@is_subset_sets.register(Range, Interval) +def _(a_range, b_interval): if a_range.step.is_positive: if b_interval.left_open and a_range.inf.is_finite: cond_left = a_range.inf > b_interval.left @@ -67,8 +71,8 @@ def is_subset_sets(a_range, b_interval): # noqa:F811 cond_right = a_range.sup <= b_interval.right return fuzzy_and([cond_left, cond_right]) -@dispatch(Range, FiniteSet) # type: ignore # noqa:F811 -def is_subset_sets(a_range, b_finiteset): # noqa:F811 +@is_subset_sets.register(Range, FiniteSet) +def _(a_range, b_finiteset): try: a_size = a_range.size except ValueError: @@ -101,40 +105,40 @@ def is_subset_sets(a_range, b_finiteset): # noqa:F811 return True return None -@dispatch(Interval, Range) # type: ignore # noqa:F811 -def is_subset_sets(a_interval, b_range): # noqa:F811 +@is_subset_sets.register(Interval, Range) +def _(a_interval, b_range): if a_interval.measure.is_extended_nonzero: return False -@dispatch(Interval, Rationals) # type: ignore # noqa:F811 -def is_subset_sets(a_interval, b_rationals): # noqa:F811 +@is_subset_sets.register(Interval, Rationals) +def _(a_interval, b_rationals): if a_interval.measure.is_extended_nonzero: return False -@dispatch(Range, Complexes) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Range, Complexes) +def _(a, b): return True -@dispatch(Complexes, Interval) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Complexes, Interval) +def _(a, b): return False -@dispatch(Complexes, Range) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Complexes, Range) +def _(a, b): return False -@dispatch(Complexes, Rationals) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Complexes, Rationals) +def _(a, b): return False -@dispatch(Rationals, Reals) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Rationals, Reals) +def _(a, b): return True -@dispatch(Rationals, Range) # type: ignore # noqa:F811 -def is_subset_sets(a, b): # noqa:F811 +@is_subset_sets.register(Rationals, 
Range) +def _(a, b): return False -@dispatch(ProductSet, FiniteSet) # type: ignore # noqa:F811 -def is_subset_sets(a_ps, b_fs): # noqa:F811 +@is_subset_sets.register(ProductSet, FiniteSet) +def _(a_ps, b_fs): return fuzzy_and(b_fs.contains(x) for x in a_ps) diff --git a/sympy/sets/handlers/mul.py b/sympy/sets/handlers/mul.py index 984ac0571930..0dedc8068b79 100644 --- a/sympy/sets/handlers/mul.py +++ b/sympy/sets/handlers/mul.py @@ -1,27 +1,32 @@ from sympy.core import Basic, Expr from sympy.core.numbers import oo from sympy.core.symbol import symbols -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher from sympy.sets.setexpr import set_mul from sympy.sets.sets import Interval, Set + _x, _y = symbols("x y") -@dispatch(Basic, Basic) # type: ignore # noqa:F811 -def _set_mul(x, y): # noqa:F811 +_set_mul = Dispatcher('_set_mul') +_set_div = Dispatcher('_set_div') + + +@_set_mul.register(Basic, Basic) +def _(x, y): return None -@dispatch(Set, Set) # type: ignore # noqa:F811 -def _set_mul(x, y): # noqa:F811 +@_set_mul.register(Set, Set) +def _(x, y): return None -@dispatch(Expr, Expr) # type: ignore # noqa:F811 -def _set_mul(x, y): # noqa:F811 +@_set_mul.register(Expr, Expr) +def _(x, y): return x*y -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def _set_mul(x, y): # noqa:F811 +@_set_mul.register(Interval, Interval) +def _(x, y): """ Multiplications in interval arithmetic https://en.wikipedia.org/wiki/Interval_arithmetic @@ -43,20 +48,20 @@ def _set_mul(x, y): # noqa:F811 maxopen ) -@dispatch(Basic, Basic) # type: ignore # noqa:F811 -def _set_div(x, y): # noqa:F811 +@_set_div.register(Basic, Basic) +def _(x, y): return None -@dispatch(Expr, Expr) # type: ignore # noqa:F811 -def _set_div(x, y): # noqa:F811 +@_set_div.register(Expr, Expr) +def _(x, y): return x/y -@dispatch(Set, Set) # type: ignore # noqa:F811 # noqa:F811 -def _set_div(x, y): # noqa:F811 +@_set_div.register(Set, Set) +def _(x, y): return None -@dispatch(Interval, Interval) # type: ignore # noqa:F811 -def _set_div(x, y): # noqa:F811 +@_set_div.register(Interval, Interval) +def _(x, y): """ Divisions in interval arithmetic https://en.wikipedia.org/wiki/Interval_arithmetic diff --git a/sympy/sets/handlers/power.py b/sympy/sets/handlers/power.py index 2e510deb1653..3cad4ee49ab2 100644 --- a/sympy/sets/handlers/power.py +++ b/sympy/sets/handlers/power.py @@ -7,30 +7,33 @@ from sympy.sets.fancysets import ImageSet from sympy.sets.setexpr import set_div from sympy.sets.sets import Set, Interval, FiniteSet, Union -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher _x, _y = symbols("x y") -@dispatch(Basic, Basic) # type: ignore # noqa:F811 -def _set_pow(x, y): # noqa:F811 +_set_pow = Dispatcher('_set_pow') + + +@_set_pow.register(Basic, Basic) +def _(x, y): return None -@dispatch(Set, Set) # type: ignore # noqa:F811 -def _set_pow(x, y): # noqa:F811 +@_set_pow.register(Set, Set) +def _(x, y): return ImageSet(Lambda((_x, _y), (_x ** _y)), x, y) -@dispatch(Expr, Expr) # type: ignore # noqa:F811 -def _set_pow(x, y): # noqa:F811 +@_set_pow.register(Expr, Expr) +def _(x, y): return x**y -@dispatch(Interval, Zero) # type: ignore # noqa:F811 -def _set_pow(x, z): # noqa:F811 +@_set_pow.register(Interval, Zero) +def _(x, z): return FiniteSet(S.One) -@dispatch(Interval, Integer) # type: ignore # noqa:F811 -def _set_pow(x, exponent): # noqa:F811 +@_set_pow.register(Interval, Integer) +def _(x, exponent): """ Powers in interval arithmetic 
https://en.wikipedia.org/wiki/Interval_arithmetic @@ -77,8 +80,8 @@ def _set_pow(x, exponent): # noqa:F811 else: return Interval(S.Zero, sleft, S.Zero not in x, left_open) -@dispatch(Interval, Infinity) # type: ignore # noqa:F811 -def _set_pow(b, e): # noqa:F811 +@_set_pow.register(Interval, Infinity) +def _(b, e): # TODO: add logic for open intervals? if b.start.is_nonnegative: if b.end < 1: @@ -99,6 +102,6 @@ def _set_pow(b, e): # noqa:F811 return Interval(0, oo) return Interval(-oo, oo) -@dispatch(Interval, NegativeInfinity) # type: ignore # noqa:F811 -def _set_pow(b, e): # noqa:F811 +@_set_pow.register(Interval, NegativeInfinity) +def _(b, e): return _set_pow(set_div(S.One, b), oo) diff --git a/sympy/sets/handlers/union.py b/sympy/sets/handlers/union.py index fee36c83b97b..35ccf8f6d743 100644 --- a/sympy/sets/handlers/union.py +++ b/sympy/sets/handlers/union.py @@ -4,43 +4,46 @@ Interval, ProductSet, Set, Union, UniversalSet) from sympy.sets.fancysets import (ComplexRegion, Naturals, Naturals0, Integers, Rationals, Reals) -from sympy.multipledispatch import dispatch +from sympy.multipledispatch import Dispatcher -@dispatch(Naturals0, Naturals) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +union_sets = Dispatcher('union_sets') + + +@union_sets.register(Naturals0, Naturals) +def _(a, b): return a -@dispatch(Rationals, Naturals) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Rationals, Naturals) +def _(a, b): return a -@dispatch(Rationals, Naturals0) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Rationals, Naturals0) +def _(a, b): return a -@dispatch(Reals, Naturals) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Reals, Naturals) +def _(a, b): return a -@dispatch(Reals, Naturals0) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Reals, Naturals0) +def _(a, b): return a -@dispatch(Reals, Rationals) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Reals, Rationals) +def _(a, b): return a -@dispatch(Integers, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Integers, Set) +def _(a, b): intersect = Intersection(a, b) if intersect == a: return b elif intersect == b: return a -@dispatch(ComplexRegion, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(ComplexRegion, Set) +def _(a, b): if b.is_subset(S.Reals): # treat a subset of reals as a complex region b = ComplexRegion.from_real(b) @@ -54,17 +57,17 @@ def union_sets(a, b): # noqa:F811 return ComplexRegion(Union(a.sets, b.sets), polar=True) return None -@dispatch(EmptySet, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(EmptySet, Set) +def _(a, b): return b -@dispatch(UniversalSet, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(UniversalSet, Set) +def _(a, b): return a -@dispatch(ProductSet, ProductSet) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(ProductSet, ProductSet) +def _(a, b): if b.is_subset(a): return a if len(b.sets) != len(a.sets): @@ -78,14 +81,14 @@ def union_sets(a, b): # noqa:F811 return Union(a1, b1) * a2 return None -@dispatch(ProductSet, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(ProductSet, Set) +def _(a, b): if b.is_subset(a): return a return None -@dispatch(Interval, Interval) # type: 
ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Interval, Interval) +def _(a, b): if a._is_comparable(b): from sympy.functions.elementary.miscellaneous import Min, Max # Non-overlapping intervals @@ -104,12 +107,12 @@ def union_sets(a, b): # noqa:F811 (b.end != end or b.right_open)) return Interval(start, end, left_open, right_open) -@dispatch(Interval, UniversalSet) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Interval, UniversalSet) +def _(a, b): return S.UniversalSet -@dispatch(Interval, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Interval, Set) +def _(a, b): # If I have open end points and these endpoints are contained in b # But only in case, when endpoints are finite. Because # interval does not contain oo or -oo. @@ -127,18 +130,18 @@ def union_sets(a, b): # noqa:F811 return {new_a, b} return None -@dispatch(FiniteSet, FiniteSet) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(FiniteSet, FiniteSet) +def _(a, b): return FiniteSet(*(a._elements | b._elements)) -@dispatch(FiniteSet, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(FiniteSet, Set) +def _(a, b): # If `b` set contains one of my elements, remove it from `a` if any(b.contains(x) == True for x in a): return { FiniteSet(*[x for x in a if b.contains(x) != True]), b} return None -@dispatch(Set, Set) # type: ignore # noqa:F811 -def union_sets(a, b): # noqa:F811 +@union_sets.register(Set, Set) +def _(a, b): return None diff --git a/sympy/stats/sampling/sample_numpy.py b/sympy/stats/sampling/sample_numpy.py index ff4856593dcc..a50b17d256f1 100644 --- a/sympy/stats/sampling/sample_numpy.py +++ b/sympy/stats/sampling/sample_numpy.py @@ -17,65 +17,65 @@ def do_sample_numpy(dist, size, rand_state): # CRV: -@do_sample_numpy.register(BetaDistribution) # type: ignore +@do_sample_numpy.register(BetaDistribution) def _(dist: BetaDistribution, size, rand_state): return rand_state.beta(a=float(dist.alpha), b=float(dist.beta), size=size) -@do_sample_numpy.register(ChiSquaredDistribution) # type: ignore +@do_sample_numpy.register(ChiSquaredDistribution) def _(dist: ChiSquaredDistribution, size, rand_state): return rand_state.chisquare(df=float(dist.k), size=size) -@do_sample_numpy.register(ExponentialDistribution) # type: ignore +@do_sample_numpy.register(ExponentialDistribution) def _(dist: ExponentialDistribution, size, rand_state): return rand_state.exponential(1 / float(dist.rate), size=size) -@do_sample_numpy.register(GammaDistribution) # type: ignore +@do_sample_numpy.register(GammaDistribution) def _(dist: GammaDistribution, size, rand_state): return rand_state.gamma(float(dist.k), float(dist.theta), size=size) -@do_sample_numpy.register(LogNormalDistribution) # type: ignore +@do_sample_numpy.register(LogNormalDistribution) def _(dist: LogNormalDistribution, size, rand_state): return rand_state.lognormal(float(dist.mean), float(dist.std), size=size) -@do_sample_numpy.register(NormalDistribution) # type: ignore +@do_sample_numpy.register(NormalDistribution) def _(dist: NormalDistribution, size, rand_state): return rand_state.normal(float(dist.mean), float(dist.std), size=size) -@do_sample_numpy.register(ParetoDistribution) # type: ignore +@do_sample_numpy.register(ParetoDistribution) def _(dist: ParetoDistribution, size, rand_state): return (numpy.random.pareto(a=float(dist.alpha), size=size) + 1) * float(dist.xm) 
-@do_sample_numpy.register(UniformDistribution) # type: ignore +@do_sample_numpy.register(UniformDistribution) def _(dist: UniformDistribution, size, rand_state): return rand_state.uniform(low=float(dist.left), high=float(dist.right), size=size) # DRV: -@do_sample_numpy.register(GeometricDistribution) # type: ignore +@do_sample_numpy.register(GeometricDistribution) def _(dist: GeometricDistribution, size, rand_state): return rand_state.geometric(p=float(dist.p), size=size) -@do_sample_numpy.register(PoissonDistribution) # type: ignore +@do_sample_numpy.register(PoissonDistribution) def _(dist: PoissonDistribution, size, rand_state): return rand_state.poisson(lam=float(dist.lamda), size=size) -@do_sample_numpy.register(ZetaDistribution) # type: ignore +@do_sample_numpy.register(ZetaDistribution) def _(dist: ZetaDistribution, size, rand_state): return rand_state.zipf(a=float(dist.s), size=size) # FRV: -@do_sample_numpy.register(BinomialDistribution) # type: ignore +@do_sample_numpy.register(BinomialDistribution) def _(dist: BinomialDistribution, size, rand_state): return rand_state.binomial(n=int(dist.n), p=float(dist.p), size=size) diff --git a/sympy/stats/sampling/sample_pymc3.py b/sympy/stats/sampling/sample_pymc3.py index e3c6f8f3aae9..a20e3858e16d 100644 --- a/sympy/stats/sampling/sample_pymc3.py +++ b/sympy/stats/sampling/sample_pymc3.py @@ -17,81 +17,81 @@ def do_sample_pymc3(dist): # CRV: -@do_sample_pymc3.register(BetaDistribution) # type: ignore +@do_sample_pymc3.register(BetaDistribution) def _(dist: BetaDistribution): return pymc3.Beta('X', alpha=float(dist.alpha), beta=float(dist.beta)) -@do_sample_pymc3.register(CauchyDistribution) # type: ignore +@do_sample_pymc3.register(CauchyDistribution) def _(dist: CauchyDistribution): return pymc3.Cauchy('X', alpha=float(dist.x0), beta=float(dist.gamma)) -@do_sample_pymc3.register(ChiSquaredDistribution) # type: ignore +@do_sample_pymc3.register(ChiSquaredDistribution) def _(dist: ChiSquaredDistribution): return pymc3.ChiSquared('X', nu=float(dist.k)) -@do_sample_pymc3.register(ExponentialDistribution) # type: ignore +@do_sample_pymc3.register(ExponentialDistribution) def _(dist: ExponentialDistribution): return pymc3.Exponential('X', lam=float(dist.rate)) -@do_sample_pymc3.register(GammaDistribution) # type: ignore +@do_sample_pymc3.register(GammaDistribution) def _(dist: GammaDistribution): return pymc3.Gamma('X', alpha=float(dist.k), beta=1 / float(dist.theta)) -@do_sample_pymc3.register(LogNormalDistribution) # type: ignore +@do_sample_pymc3.register(LogNormalDistribution) def _(dist: LogNormalDistribution): return pymc3.Lognormal('X', mu=float(dist.mean), sigma=float(dist.std)) -@do_sample_pymc3.register(NormalDistribution) # type: ignore +@do_sample_pymc3.register(NormalDistribution) def _(dist: NormalDistribution): return pymc3.Normal('X', float(dist.mean), float(dist.std)) -@do_sample_pymc3.register(GaussianInverseDistribution) # type: ignore +@do_sample_pymc3.register(GaussianInverseDistribution) def _(dist: GaussianInverseDistribution): return pymc3.Wald('X', mu=float(dist.mean), lam=float(dist.shape)) -@do_sample_pymc3.register(ParetoDistribution) # type: ignore +@do_sample_pymc3.register(ParetoDistribution) def _(dist: ParetoDistribution): return pymc3.Pareto('X', alpha=float(dist.alpha), m=float(dist.xm)) -@do_sample_pymc3.register(UniformDistribution) # type: ignore +@do_sample_pymc3.register(UniformDistribution) def _(dist: UniformDistribution): return pymc3.Uniform('X', lower=float(dist.left), upper=float(dist.right)) # 
DRV: -@do_sample_pymc3.register(GeometricDistribution) # type: ignore +@do_sample_pymc3.register(GeometricDistribution) def _(dist: GeometricDistribution): return pymc3.Geometric('X', p=float(dist.p)) -@do_sample_pymc3.register(NegativeBinomialDistribution) # type: ignore +@do_sample_pymc3.register(NegativeBinomialDistribution) def _(dist: NegativeBinomialDistribution): return pymc3.NegativeBinomial('X', mu=float((dist.p * dist.r) / (1 - dist.p)), alpha=float(dist.r)) -@do_sample_pymc3.register(PoissonDistribution) # type: ignore +@do_sample_pymc3.register(PoissonDistribution) def _(dist: PoissonDistribution): return pymc3.Poisson('X', mu=float(dist.lamda)) # FRV: -@do_sample_pymc3.register(BernoulliDistribution) # type: ignore +@do_sample_pymc3.register(BernoulliDistribution) def _(dist: BernoulliDistribution): return pymc3.Bernoulli('X', p=float(dist.p)) -@do_sample_pymc3.register(BinomialDistribution) # type: ignore +@do_sample_pymc3.register(BinomialDistribution) def _(dist: BinomialDistribution): return pymc3.Binomial('X', n=int(dist.n), p=float(dist.p)) diff --git a/sympy/stats/sampling/sample_scipy.py b/sympy/stats/sampling/sample_scipy.py index 21af16f8342f..f12508f68844 100644 --- a/sympy/stats/sampling/sample_scipy.py +++ b/sympy/stats/sampling/sample_scipy.py @@ -24,7 +24,7 @@ def do_sample_scipy(dist, size, seed): # CRV -@do_sample_scipy.register(SingleContinuousDistribution) # type: ignore +@do_sample_scipy.register(SingleContinuousDistribution) def _(dist: SingleContinuousDistribution, size, seed): # if we don't need to make a handmade pdf, we won't import scipy.stats @@ -41,66 +41,66 @@ def _pdf(dist, x): return scipy_rv.rvs(size=size, random_state=seed) -@do_sample_scipy.register(ChiSquaredDistribution) # type: ignore +@do_sample_scipy.register(ChiSquaredDistribution) def _(dist: ChiSquaredDistribution, size, seed): # same parametrisation return scipy.stats.chi2.rvs(df=float(dist.k), size=size, random_state=seed) -@do_sample_scipy.register(ExponentialDistribution) # type: ignore +@do_sample_scipy.register(ExponentialDistribution) def _(dist: ExponentialDistribution, size, seed): # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html#scipy.stats.expon return scipy.stats.expon.rvs(scale=1 / float(dist.rate), size=size, random_state=seed) -@do_sample_scipy.register(GammaDistribution) # type: ignore +@do_sample_scipy.register(GammaDistribution) def _(dist: GammaDistribution, size, seed): # https://stackoverflow.com/questions/42150965/how-to-plot-gamma-distribution-with-alpha-and-beta-parameters-in-python return scipy.stats.gamma.rvs(a=float(dist.k), scale=float(dist.theta), size=size, random_state=seed) -@do_sample_scipy.register(LogNormalDistribution) # type: ignore +@do_sample_scipy.register(LogNormalDistribution) def _(dist: LogNormalDistribution, size, seed): # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html return scipy.stats.lognorm.rvs(scale=float(exp(dist.mean)), s=float(dist.std), size=size, random_state=seed) -@do_sample_scipy.register(NormalDistribution) # type: ignore +@do_sample_scipy.register(NormalDistribution) def _(dist: NormalDistribution, size, seed): return scipy.stats.norm.rvs(loc=float(dist.mean), scale=float(dist.std), size=size, random_state=seed) -@do_sample_scipy.register(ParetoDistribution) # type: ignore +@do_sample_scipy.register(ParetoDistribution) def _(dist: ParetoDistribution, size, seed): # https://stackoverflow.com/questions/42260519/defining-pareto-distribution-in-python-scipy return 
scipy.stats.pareto.rvs(b=float(dist.alpha), scale=float(dist.xm), size=size, random_state=seed) -@do_sample_scipy.register(StudentTDistribution) # type: ignore +@do_sample_scipy.register(StudentTDistribution) def _(dist: StudentTDistribution, size, seed): return scipy.stats.t.rvs(df=float(dist.nu), size=size, random_state=seed) -@do_sample_scipy.register(UniformDistribution) # type: ignore +@do_sample_scipy.register(UniformDistribution) def _(dist: UniformDistribution, size, seed): # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.html return scipy.stats.uniform.rvs(loc=float(dist.left), scale=float(dist.right - dist.left), size=size, random_state=seed) -@do_sample_scipy.register(BetaDistribution) # type: ignore +@do_sample_scipy.register(BetaDistribution) def _(dist: BetaDistribution, size, seed): # same parametrisation return scipy.stats.beta.rvs(a=float(dist.alpha), b=float(dist.beta), size=size, random_state=seed) -@do_sample_scipy.register(CauchyDistribution) # type: ignore +@do_sample_scipy.register(CauchyDistribution) def _(dist: CauchyDistribution, size, seed): return scipy.stats.cauchy.rvs(loc=float(dist.x0), scale=float(dist.gamma), size=size, random_state=seed) # DRV: -@do_sample_scipy.register(DiscreteDistributionHandmade) # type: ignore +@do_sample_scipy.register(DiscreteDistributionHandmade) def _(dist: DiscreteDistributionHandmade, size, seed): from scipy.stats import rv_discrete @@ -116,44 +116,44 @@ def _pmf(dist, x): return scipy_rv.rvs(size=size, random_state=seed) -@do_sample_scipy.register(GeometricDistribution) # type: ignore +@do_sample_scipy.register(GeometricDistribution) def _(dist: GeometricDistribution, size, seed): return scipy.stats.geom.rvs(p=float(dist.p), size=size, random_state=seed) -@do_sample_scipy.register(LogarithmicDistribution) # type: ignore +@do_sample_scipy.register(LogarithmicDistribution) def _(dist: LogarithmicDistribution, size, seed): return scipy.stats.logser.rvs(p=float(dist.p), size=size, random_state=seed) -@do_sample_scipy.register(NegativeBinomialDistribution) # type: ignore +@do_sample_scipy.register(NegativeBinomialDistribution) def _(dist: NegativeBinomialDistribution, size, seed): return scipy.stats.nbinom.rvs(n=float(dist.r), p=float(dist.p), size=size, random_state=seed) -@do_sample_scipy.register(PoissonDistribution) # type: ignore +@do_sample_scipy.register(PoissonDistribution) def _(dist: PoissonDistribution, size, seed): return scipy.stats.poisson.rvs(mu=float(dist.lamda), size=size, random_state=seed) -@do_sample_scipy.register(SkellamDistribution) # type: ignore +@do_sample_scipy.register(SkellamDistribution) def _(dist: SkellamDistribution, size, seed): return scipy.stats.skellam.rvs(mu1=float(dist.mu1), mu2=float(dist.mu2), size=size, random_state=seed) -@do_sample_scipy.register(YuleSimonDistribution) # type: ignore +@do_sample_scipy.register(YuleSimonDistribution) def _(dist: YuleSimonDistribution, size, seed): return scipy.stats.yulesimon.rvs(alpha=float(dist.rho), size=size, random_state=seed) -@do_sample_scipy.register(ZetaDistribution) # type: ignore +@do_sample_scipy.register(ZetaDistribution) def _(dist: ZetaDistribution, size, seed): return scipy.stats.zipf.rvs(a=float(dist.s), size=size, random_state=seed) # FRV: -@do_sample_scipy.register(SingleFiniteDistribution) # type: ignore +@do_sample_scipy.register(SingleFiniteDistribution) def _(dist: SingleFiniteDistribution, size, seed): # scipy can handle with custom distributions diff --git 
a/sympy/tensor/array/expressions/array_expressions.py b/sympy/tensor/array/expressions/array_expressions.py index 75573cbb3200..75f534696d56 100644 --- a/sympy/tensor/array/expressions/array_expressions.py +++ b/sympy/tensor/array/expressions/array_expressions.py @@ -33,7 +33,7 @@ class _ArrayExpr(Expr): - pass + shape : tTuple[Expr, ...] class ArraySymbol(_ArrayExpr): diff --git a/sympy/tensor/array/expressions/arrayexpr_derivatives.py b/sympy/tensor/array/expressions/arrayexpr_derivatives.py index 20ca1b283892..97fa4659f5d5 100644 --- a/sympy/tensor/array/expressions/arrayexpr_derivatives.py +++ b/sympy/tensor/array/expressions/arrayexpr_derivatives.py @@ -10,9 +10,11 @@ from sympy.matrices.expressions.transpose import Transpose from sympy.combinatorics.permutations import _af_invert from sympy.matrices.expressions.applyfunc import ElementwiseApplyFunction -from sympy.tensor.array.expressions.array_expressions import ZeroArray, ArraySymbol, ArrayTensorProduct, \ - ArrayAdd, PermuteDims, ArrayDiagonal, ArrayElementwiseApplyFunc, get_rank, \ - get_shape, ArrayContraction, _array_tensor_product, _array_contraction, _array_diagonal, _array_add, _permute_dims +from sympy.tensor.array.expressions.array_expressions import ( + _ArrayExpr, ZeroArray, ArraySymbol, ArrayTensorProduct, ArrayAdd, + PermuteDims, ArrayDiagonal, ArrayElementwiseApplyFunc, get_rank, + get_shape, ArrayContraction, _array_tensor_product, _array_contraction, + _array_diagonal, _array_add, _permute_dims) from sympy.tensor.array.expressions.conv_matrix_to_array import convert_matrix_to_array @@ -21,12 +23,12 @@ def array_derive(expr, x): raise NotImplementedError(f"not implemented for type {type(expr)}") -@array_derive.register(Expr) # type: ignore -def _(expr: Expr, x: Expr): - return ZeroArray(*x.shape) # type: ignore +@array_derive.register(Expr) +def _(expr: Expr, x: _ArrayExpr): + return ZeroArray(*x.shape) -@array_derive.register(ArrayTensorProduct) # type: ignore +@array_derive.register(ArrayTensorProduct) def _(expr: ArrayTensorProduct, x: Expr): args = expr.args addend_list = [] @@ -56,33 +58,33 @@ def _(expr: ArrayTensorProduct, x: Expr): return _array_add(*addend_list) -@array_derive.register(ArraySymbol) # type: ignore -def _(expr: ArraySymbol, x: Expr): +@array_derive.register(ArraySymbol) +def _(expr: ArraySymbol, x: _ArrayExpr): if expr == x: return _permute_dims( ArrayTensorProduct.fromiter(Identity(i) for i in expr.shape), [2*i for i in range(len(expr.shape))] + [2*i+1 for i in range(len(expr.shape))] ) - return ZeroArray(*(x.shape + expr.shape)) # type: ignore + return ZeroArray(*(x.shape + expr.shape)) -@array_derive.register(MatrixSymbol) # type: ignore -def _(expr: MatrixSymbol, x: Expr): +@array_derive.register(MatrixSymbol) +def _(expr: MatrixSymbol, x: _ArrayExpr): m, n = expr.shape if expr == x: return _permute_dims( _array_tensor_product(Identity(m), Identity(n)), [0, 2, 1, 3] ) - return ZeroArray(*(x.shape + expr.shape)) # type: ignore + return ZeroArray(*(x.shape + expr.shape)) -@array_derive.register(Identity) # type: ignore -def _(expr: Identity, x: Expr): - return ZeroArray(*(x.shape + expr.shape)) # type: ignore +@array_derive.register(Identity) +def _(expr: Identity, x: _ArrayExpr): + return ZeroArray(*(x.shape + expr.shape)) -@array_derive.register(Transpose) # type: ignore +@array_derive.register(Transpose) def _(expr: Transpose, x: Expr): # D(A.T, A) ==> (m,n,i,j) ==> D(A_ji, A_mn) = d_mj d_ni # D(B.T, A) ==> (m,n,i,j) ==> D(B_ji, A_mn) @@ -90,7 +92,7 @@ def _(expr: Transpose, x: Expr): 
return _permute_dims(fd, [0, 1, 3, 2]) -@array_derive.register(Inverse) # type: ignore +@array_derive.register(Inverse) def _(expr: Inverse, x: Expr): mat = expr.I dexpr = array_derive(mat, x) @@ -100,7 +102,7 @@ def _(expr: Inverse, x: Expr): return pp -@array_derive.register(ElementwiseApplyFunction) # type: ignore +@array_derive.register(ElementwiseApplyFunction) def _(expr: ElementwiseApplyFunction, x: Expr): assert get_rank(expr) == 2 assert get_rank(x) == 2 @@ -116,7 +118,7 @@ def _(expr: ElementwiseApplyFunction, x: Expr): return td -@array_derive.register(ArrayElementwiseApplyFunc) # type: ignore +@array_derive.register(ArrayElementwiseApplyFunc) def _(expr: ArrayElementwiseApplyFunc, x: Expr): fdiff = expr._get_function_fdiff() subexpr = expr.expr @@ -131,18 +133,18 @@ def _(expr: ArrayElementwiseApplyFunc, x: Expr): return _array_diagonal(tp, *diag_indices) -@array_derive.register(MatrixExpr) # type: ignore +@array_derive.register(MatrixExpr) def _(expr: MatrixExpr, x: Expr): cg = convert_matrix_to_array(expr) return array_derive(cg, x) -@array_derive.register(HadamardProduct) # type: ignore +@array_derive.register(HadamardProduct) def _(expr: HadamardProduct, x: Expr): raise NotImplementedError() -@array_derive.register(ArrayContraction) # type: ignore +@array_derive.register(ArrayContraction) def _(expr: ArrayContraction, x: Expr): fd = array_derive(expr.expr, x) rank_x = len(get_shape(x)) @@ -151,7 +153,7 @@ def _(expr: ArrayContraction, x: Expr): return _array_contraction(fd, *new_contraction_indices) -@array_derive.register(ArrayDiagonal) # type: ignore +@array_derive.register(ArrayDiagonal) def _(expr: ArrayDiagonal, x: Expr): dsubexpr = array_derive(expr.expr, x) rank_x = len(get_shape(x)) @@ -159,12 +161,12 @@ def _(expr: ArrayDiagonal, x: Expr): return _array_diagonal(dsubexpr, *diag_indices) -@array_derive.register(ArrayAdd) # type: ignore +@array_derive.register(ArrayAdd) def _(expr: ArrayAdd, x: Expr): return _array_add(*[array_derive(arg, x) for arg in expr.args]) -@array_derive.register(PermuteDims) # type: ignore +@array_derive.register(PermuteDims) def _(expr: PermuteDims, x: Expr): de = array_derive(expr.expr, x) perm = [0, 1] + [i + 2 for i in expr.permutation.array_form] diff --git a/sympy/tensor/array/expressions/conv_array_to_matrix.py b/sympy/tensor/array/expressions/conv_array_to_matrix.py index 50d250dfe1bf..fdad839ceabf 100644 --- a/sympy/tensor/array/expressions/conv_array_to_matrix.py +++ b/sympy/tensor/array/expressions/conv_array_to_matrix.py @@ -172,7 +172,7 @@ def _array2matrix(expr): return expr -@_array2matrix.register(ZeroArray) # type: ignore +@_array2matrix.register(ZeroArray) def _(expr: ZeroArray): if get_rank(expr) == 2: return ZeroMatrix(*expr.shape) @@ -180,12 +180,12 @@ def _(expr: ZeroArray): return expr -@_array2matrix.register(ArrayTensorProduct) # type: ignore +@_array2matrix.register(ArrayTensorProduct) def _(expr: ArrayTensorProduct): return _a2m_tensor_product(*[_array2matrix(arg) for arg in expr.args]) -@_array2matrix.register(ArrayContraction) # type: ignore +@_array2matrix.register(ArrayContraction) def _(expr: ArrayContraction): expr = expr.flatten_contraction_of_diagonal() expr = identify_removable_identity_matrices(expr) @@ -226,7 +226,7 @@ def _(expr: ArrayContraction): return _array_contraction(ret, *expr.contraction_indices) -@_array2matrix.register(ArrayDiagonal) # type: ignore +@_array2matrix.register(ArrayDiagonal) def _(expr: ArrayDiagonal): pexpr = _array_diagonal(_array2matrix(expr.expr), *expr.diagonal_indices) 
pexpr = identify_hadamard_products(pexpr) @@ -237,7 +237,7 @@ def _(expr: ArrayDiagonal): return _array2matrix(pexpr) -@_array2matrix.register(PermuteDims) # type: ignore +@_array2matrix.register(PermuteDims) def _(expr: PermuteDims): if expr.permutation.array_form == [1, 0]: return _a2m_transpose(_array2matrix(expr.expr)) @@ -283,23 +283,22 @@ def _(expr: PermuteDims): p2 = permuted[2*i+1] if p1 // 2 != p2 // 2: return _permute_dims(mat_mul_lines, permutation) - pos = p1 // 2 if p1 > p2: - args_array[i] = _a2m_transpose(mat_mul_lines.args[pos]) # type: ignore + args_array[i] = _a2m_transpose(mat_mul_lines.args[p1 // 2]) else: - args_array[i] = mat_mul_lines.args[pos] # type: ignore + args_array[i] = mat_mul_lines.args[p1 // 2] return _a2m_tensor_product(*args_array) else: return expr -@_array2matrix.register(ArrayAdd) # type: ignore +@_array2matrix.register(ArrayAdd) def _(expr: ArrayAdd): addends = [_array2matrix(arg) for arg in expr.args] return _a2m_add(*addends) -@_array2matrix.register(ArrayElementwiseApplyFunc) # type: ignore +@_array2matrix.register(ArrayElementwiseApplyFunc) def _(expr: ArrayElementwiseApplyFunc): subexpr = _array2matrix(expr.expr) if isinstance(subexpr, MatrixExpr): @@ -315,7 +314,7 @@ def _(expr: ArrayElementwiseApplyFunc): return ArrayElementwiseApplyFunc(expr.function, subexpr) -@_array2matrix.register(ArrayElement) # type: ignore +@_array2matrix.register(ArrayElement) def _(expr: ArrayElement): ret = _array2matrix(expr.name) if isinstance(ret, MatrixExpr): @@ -328,7 +327,7 @@ def _remove_trivial_dims(expr): return expr, [] -@_remove_trivial_dims.register(ArrayTensorProduct) # type: ignore +@_remove_trivial_dims.register(ArrayTensorProduct) def _(expr: ArrayTensorProduct): # Recognize expressions like [x, y] with shape (k, 1, k, 1) as `x*y.T`. 
# The matrix expression has to be equivalent to the tensor product of the @@ -403,7 +402,7 @@ def _(expr: ArrayTensorProduct): return newexpr, newremoved -@_remove_trivial_dims.register(ArrayAdd) # type: ignore +@_remove_trivial_dims.register(ArrayAdd) def _(expr: ArrayAdd): rec = [_remove_trivial_dims(arg) for arg in expr.args] newargs, removed = zip(*rec) @@ -412,7 +411,7 @@ def _(expr: ArrayAdd): return _a2m_add(*newargs), removed[0] -@_remove_trivial_dims.register(PermuteDims) # type: ignore +@_remove_trivial_dims.register(PermuteDims) def _(expr: PermuteDims): subexpr, subremoved = _remove_trivial_dims(expr.expr) p = expr.permutation.array_form @@ -429,7 +428,7 @@ def _(expr: PermuteDims): return newexpr, premoved -@_remove_trivial_dims.register(ArrayContraction) # type: ignore +@_remove_trivial_dims.register(ArrayContraction) def _(expr: ArrayContraction): new_expr, removed0 = _array_contraction_to_diagonal_multiple_identity(expr) if new_expr != expr: @@ -483,21 +482,21 @@ def _remove_diagonalized_identity_matrices(expr: ArrayDiagonal): return editor.to_array_contraction(), removed -@_remove_trivial_dims.register(ArrayDiagonal) # type: ignore +@_remove_trivial_dims.register(ArrayDiagonal) def _(expr: ArrayDiagonal): newexpr, removed = _remove_trivial_dims(expr.expr) shifts = list(accumulate([0] + [1 if i in removed else 0 for i in range(get_rank(expr.expr))])) - new_diag_indices = {i: tuple(j for j in i if j not in removed) for i in expr.diagonal_indices} - for old_diag_tuple, new_diag_tuple in new_diag_indices.items(): + new_diag_indices_map = {i: tuple(j for j in i if j not in removed) for i in expr.diagonal_indices} + for old_diag_tuple, new_diag_tuple in new_diag_indices_map.items(): if len(new_diag_tuple) == 1: removed = [i for i in removed if i not in old_diag_tuple] - new_diag_indices = [tuple(j - shifts[j] for j in i) for i in new_diag_indices.values()] # type: ignore + new_diag_indices = [tuple(j - shifts[j] for j in i) for i in new_diag_indices_map.values()] rank = get_rank(expr.expr) removed = ArrayDiagonal._push_indices_up(expr.diagonal_indices, removed, rank) removed = sorted({i for i in removed}) # If there are single axes to diagonalize remaining, it means that their # corresponding dimension has been removed, they no longer need diagonalization: - new_diag_indices = [i for i in new_diag_indices if len(i) > 0] # type: ignore + new_diag_indices = [i for i in new_diag_indices if len(i) > 0] if len(new_diag_indices) > 0: newexpr2 = _array_diagonal(newexpr, *new_diag_indices, allow_trivial_diags=True) else: @@ -510,7 +509,7 @@ def _(expr: ArrayDiagonal): return newexpr2, removed -@_remove_trivial_dims.register(ElementwiseApplyFunction) # type: ignore +@_remove_trivial_dims.register(ElementwiseApplyFunction) def _(expr: ElementwiseApplyFunction): subexpr, removed = _remove_trivial_dims(expr.expr) if subexpr.shape == (1, 1): @@ -519,7 +518,7 @@ def _(expr: ElementwiseApplyFunction): return ElementwiseApplyFunction(expr.function, subexpr), [] -@_remove_trivial_dims.register(ArrayElementwiseApplyFunc) # type: ignore +@_remove_trivial_dims.register(ArrayElementwiseApplyFunc) def _(expr: ArrayElementwiseApplyFunc): subexpr, removed = _remove_trivial_dims(expr.expr) return ArrayElementwiseApplyFunc(expr.function, subexpr), removed
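The patch above mechanically replaces repeated `@dispatch(...)  # type: ignore # noqa:F811` definitions with a single `Dispatcher` object plus `register` calls, and drops the `# type: ignore` markers from singledispatch-style registrations. The minimal, self-contained sketch below illustrates the two registration styles it converges on; the `Point`/`Line` types and the `combine`/`describe` names are invented for illustration and do not appear in the patch.

```python
from functools import singledispatch

from sympy.multipledispatch import Dispatcher


class Point:
    pass


class Line:
    pass


# Multi-argument handlers: one Dispatcher object, implementations registered on it,
# each defined as `_` so no name is redefined (no need for noqa:F811).
combine = Dispatcher('combine')


@combine.register(Point, Point)
def _(a, b):
    return "point/point"


@combine.register(Point, Line)
def _(a, b):
    return "point/line"


# Single-argument handlers (the do_sample_* style in the patch) follow the
# functools.singledispatch pattern: register concrete types on a generic function.
@singledispatch
def describe(obj):
    return "generic"


@describe.register(Point)
def _(obj):
    return "a point"


print(combine(Point(), Line()))  # point/line
print(describe(Point()))         # a point
print(describe(Line()))          # generic
```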
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Test Suite / CI Enhancements" }
xonsh__xonsh-5349@0e6731e
xonsh/xonsh
Python
5,349
EnvPath methods (append, remove, add, insert) prepare the path
Closes #2468 Before: ```xsh $PATH.append(p"~/node_modules/.bin") $PATH.remove(p"~/node_modules/.bin") # ValueError $PATH.append("~/node_modules/.bin") $PATH.remove(p"~/node_modules/.bin") # ValueError $PATH.append($HOME+"/node_modules/.bin") $PATH.remove(p"~/node_modules/.bin") # ValueError $PATH.prepend("/path") # 'EnvPath' object has no attribute 'prepend' ``` After: All cases are working well. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2024-04-20T19:29:48Z
Can't remove Path objects from $PATH If you add a `Path` object to `$PATH`, it is coerced to a string. However, the same coercion doesn't happen with `.remove()` For example, this fails: ``` 🐚 $PATH.append(p"~/node_modules/.bin") 🐚 $PATH.remove(p"~/node_modules/.bin") ValueError ``` (The work-around is to cast it to a `str` in the second line.) ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
If anybody wants to add the path temporarily there is `swap`: ```python print($PATH) with ${...}.swap(PATH=$PATH+[p"~/node_modules/.bin"]): print($PATH) print($PATH) # ['/bin', '/usr/games', '/usr/local/games', '/snap/bin'] # ['/bin', '/usr/games', '/usr/local/games', '/snap/bin', PosixPath('/home/user/node_modules/.bin')] # ['/bin', '/usr/games', '/usr/local/games', '/snap/bin'] ``` Oh this is actually a bit more confusing than it looks. `$PATH.append(p"/usr/foo")` actually doesn't coerce the path to a string. It's coerced to a string when `$PATH` is inherited to subprocesses, but other operations are a bit more subtle. A simple `$PATH` "command" in xonsh doesn't print its `repr()`, or its `str()`, but uses xonsh's pretty printing mechanism. `EnvPath._repr_pretty_` iterates over `self`: https://github.com/xonsh/xonsh/blob/17a43bd073323ed60395413590bce4b2bc62c8e1/xonsh/tools.py#L248-L254 That calls `EnvPath.__getitem__()`: https://github.com/xonsh/xonsh/blob/17a43bd073323ed60395413590bce4b2bc62c8e1/xonsh/tools.py#L214-L219 Which calls `_expand_path()`, which eventually calls (on Unix) `os.path.expanduser()`. *That* transparently takes `pathlib.Path`s but returns strings! All of this means that `$PATH` will look like a list of strings, and `$PATH[-1]` will return a string, but the actual values aren't: ```xonsh @ $PATH = ["/usr/foo", p"/usr/bar"] @ $PATH EnvPath( ['/usr/foo', '/usr/bar'] ) @ print(str($PATH)) ['/usr/foo', PosixPath('/usr/bar')] ``` Given that… I'm not sure what the right solution is here. `EnvPath` could override basically every `MutableSequence` method (like `.append` and `.remove`) to normalize the arguments, or `EnvPath` could normalize its contents on `__setitem__` and `insert` — but in that case, normalize them to what? `bytes` and `pathlib.Path` both would have compelling arguments.
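One way to read the suggestion above ("override basically every `MutableSequence` method to normalize the arguments") is sketched below. This is not xonsh's `EnvPath`; the class and helper names are invented, and it normalizes to expanded strings purely for illustration, whereas the actual fix in the patches further down takes its own approach.

```python
import os
import pathlib
from collections.abc import MutableSequence


def _normalize(value):
    # Accept str or pathlib.Path and store one canonical form: an expanded str.
    return os.path.expanduser(os.fspath(value))


class NormalizedPathList(MutableSequence):
    def __init__(self, items=()):
        self._l = [_normalize(i) for i in items]

    def __getitem__(self, index):
        return self._l[index]

    def __setitem__(self, index, value):
        self._l[index] = _normalize(value)

    def __delitem__(self, index):
        del self._l[index]

    def __len__(self):
        return len(self._l)

    def insert(self, index, value):
        self._l.insert(index, _normalize(value))

    def remove(self, value):
        # Normalize the argument too, so Path("~/x") and "~/x" name the same entry.
        self._l.remove(_normalize(value))


paths = NormalizedPathList()
paths.append(pathlib.Path("~/node_modules/.bin"))  # append() is built on insert()
paths.remove("~/node_modules/.bin")                # no ValueError
assert list(paths) == []
```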
[ { "body": "If you add a `Path` object to `$PATH`, it is coerced to a string. However, the same coercion doesn't happen with `.remove()`\r\n\r\nFor example, this fails:\r\n\r\n```\r\n🐚 $PATH.append(p\"~/node_modules/.bin\")\r\n🐚 $PATH.remove(p\"~/node_modules/.bin\")\r\nValueError\r\n```\r\n\r\n(The work-around is to cast it to a `str` in the second line.)\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 2468, "title": "Can't remove Path objects from $PATH" } ]
e30dd960e2ec91b2df71ec95417a8b97645fb898
{ "head_commit": "0e6731e94ce0ba06ad00d927baf7fa5e053a9a71", "head_commit_message": "[pre-commit.ci] auto fixes from pre-commit.com hooks\n\nfor more information, see https://pre-commit.ci", "patch_to_review": "diff --git a/news/path_imp.rst b/news/path_imp.rst\nnew file mode 100644\nindex 0000000000..6abbb59149\n--- /dev/null\n+++ b/news/path_imp.rst\n@@ -0,0 +1,23 @@\n+**Added:**\n+\n+* EnvPath shows warning if path that added does not exists.\n+\n+**Changed:**\n+\n+* EnvPath methods (append, remove, add, insert) prepare the path before add.\n+\n+**Deprecated:**\n+\n+* <news item>\n+\n+**Removed:**\n+\n+* <news item>\n+\n+**Fixed:**\n+\n+* <news item>\n+\n+**Security:**\n+\n+* <news item>\ndiff --git a/tests/test_tools.py b/tests/test_tools.py\nindex f37edc0e6f..d96c9e384d 100644\n--- a/tests/test_tools.py\n+++ b/tests/test_tools.py\n@@ -1014,6 +1014,46 @@ def test_env_path_add_pathlib():\n ]\n \n \n+def test_env_path_append_remove_pathlib_path():\n+ path = EnvPath()\n+\n+ # Append-remove\n+ path.append(os.sep.join([\"home\", \"dino\"]))\n+ path.remove(os.sep.join([\"home\", \"dino\"]))\n+\n+ path.append(os.sep.join([\"~\", \"dino\"]))\n+ path.remove(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+\n+ path.append(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+ path.remove(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+\n+ path.append(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+ path.remove(os.sep.join([\"~\", \"dino\"]))\n+\n+ path.append(\n+ pathlib.Path(os.sep.join([str(pathlib.Path(\"~\").expanduser()), \"dino\"]))\n+ )\n+ path.remove(os.sep.join([\"~\", \"dino\"]))\n+\n+ path.append(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+ path.remove(os.sep.join([str(pathlib.Path(\"~\").expanduser()), \"dino\"]))\n+\n+ # Insert-remove\n+ path.insert(0, os.sep.join([\"home\", \"dino\"]))\n+ path.remove(os.sep.join([\"home\", \"dino\"]))\n+\n+ path.insert(0, os.sep.join([\"~\", \"dino\"]))\n+ path.remove(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+\n+ path.insert(0, pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+ path.remove(pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+\n+ path.insert(0, pathlib.Path(os.sep.join([\"~\", \"dino\"])))\n+ path.remove(os.sep.join([\"~\", \"dino\"]))\n+\n+ assert path == []\n+\n+\n # helper\n def expand(path):\n return os.path.expanduser(os.path.expandvars(path))\ndiff --git a/xonsh/tools.py b/xonsh/tools.py\nindex 5a06156147..c390e95ac5 100644\n--- a/xonsh/tools.py\n+++ b/xonsh/tools.py\n@@ -228,8 +228,28 @@ def __len__(self):\n def __delitem__(self, key):\n self._l.__delitem__(key)\n \n+ @staticmethod\n+ def _prepare_path(p):\n+ return str(expand_path(p))\n+\n+ @staticmethod\n+ def _check_path(p):\n+ if not pathlib.Path(p).exists():\n+ print(f\"EnvPath warning: path {repr(p)} does not exists.\", file=sys.stderr)\n+\n def insert(self, index, value):\n- self._l.insert(index, value)\n+ self._l.insert(index, self._prepare_path(value))\n+ self._check_path(value)\n+\n+ def append(self, value):\n+ self._l.append(self._prepare_path(value))\n+ self._check_path(value)\n+\n+ def remove(self, value):\n+ try:\n+ self._l.remove(self._prepare_path(value))\n+ except ValueError:\n+ print(f\"EnvPath warning: path {repr(value)} not found.\", file=sys.stderr)\n \n @property\n def paths(self):\n@@ -292,13 +312,14 @@ def add(self, data, front=False, replace=False):\n None\n \n \"\"\"\n- data = str(expand_path(data))\n+ data = self._prepare_path(data)\n if data not in self._l:\n self._l.insert(0 if front else len(self._l), data)\n elif replace:\n # 
https://stackoverflow.com/a/25251306/1621381\n self._l = list(filter(lambda x: x != data, self._l))\n self._l.insert(0 if front else len(self._l), data)\n+ self._check_path(data)\n \n \n class FlexibleFormatter(string.Formatter):\n" }
[ { "diff_hunk": "@@ -228,8 +228,28 @@ def __len__(self):\n def __delitem__(self, key):\n self._l.__delitem__(key)\n \n+ @staticmethod\n+ def _prepare_path(p):\n+ return str(expand_path(p))\n+\n+ @staticmethod\n+ def _check_path(p):\n+ if not pathlib.Path(p).exists():\n+ print(f\"EnvPath warning: path {repr(p)} does not exists.\", file=sys.stderr)", "line": null, "original_line": 238, "original_start_line": null, "path": "xonsh/tools.py", "start_line": null, "text": "@user1:\nIt is not needed to check by default. Many of the times, the dirs in $PATH would have actually been deleted or yet to be created. \n\n@author:\n@user1 removed" } ]
791fa0c2ae5917700f978ab687c51c9bc3eb5e6e
diff --git a/news/path_imp.rst b/news/path_imp.rst new file mode 100644 index 0000000000..20704b3278 --- /dev/null +++ b/news/path_imp.rst @@ -0,0 +1,23 @@ +**Added:** + +* Added PATH.prepend(path) to add path to the beginning. + +**Changed:** + +* EnvPath methods (append, remove, add, insert) prepare the path before action. + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* <news item> + +**Security:** + +* <news item> diff --git a/tests/test_tools.py b/tests/test_tools.py index f37edc0e6f..0cddc26532 100644 --- a/tests/test_tools.py +++ b/tests/test_tools.py @@ -1014,6 +1014,46 @@ def test_env_path_add_pathlib(): ] +def test_env_path_append_remove_pathlib_path(): + path = EnvPath() + + # Append-remove + path.append(os.sep.join(["home", "dino"])) + path.remove(os.sep.join(["home", "dino"])) + + path.append(os.sep.join(["~", "dino"])) + path.remove(pathlib.Path(os.sep.join(["~", "dino"]))) + + path.append(pathlib.Path(os.sep.join(["~", "dino"]))) + path.remove(pathlib.Path(os.sep.join(["~", "dino"]))) + + path.append(pathlib.Path(os.sep.join(["~", "dino"]))) + path.remove(os.sep.join(["~", "dino"])) + + path.append( + pathlib.Path(os.sep.join([str(pathlib.Path("~").expanduser()), "dino"])) + ) + path.remove(os.sep.join(["~", "dino"])) + + path.append(pathlib.Path(os.sep.join(["~", "dino"]))) + path.remove(os.sep.join([str(pathlib.Path("~").expanduser()), "dino"])) + + # Insert-remove + path.insert(0, os.sep.join(["home", "dino"])) + path.remove(os.sep.join(["home", "dino"])) + + path.insert(0, os.sep.join(["~", "dino"])) + path.remove(pathlib.Path(os.sep.join(["~", "dino"]))) + + path.insert(0, pathlib.Path(os.sep.join(["~", "dino"]))) + path.remove(pathlib.Path(os.sep.join(["~", "dino"]))) + + path.prepend(pathlib.Path(os.sep.join(["~", "dino"]))) + path.remove(os.sep.join(["~", "dino"])) + + assert path == [] + + # helper def expand(path): return os.path.expanduser(os.path.expandvars(path)) diff --git a/xonsh/tools.py b/xonsh/tools.py index 5a06156147..139148ee97 100644 --- a/xonsh/tools.py +++ b/xonsh/tools.py @@ -228,8 +228,24 @@ def __len__(self): def __delitem__(self, key): self._l.__delitem__(key) + @staticmethod + def _prepare_path(p): + return str(expand_path(p)) + def insert(self, index, value): - self._l.insert(index, value) + self._l.insert(index, self._prepare_path(value)) + + def append(self, value): + self._l.append(self._prepare_path(value)) + + def prepend(self, value): + self._l.insert(0, self._prepare_path(value)) + + def remove(self, value): + try: + self._l.remove(self._prepare_path(value)) + except ValueError: + print(f"EnvPath warning: path {repr(value)} not found.", file=sys.stderr) @property def paths(self): @@ -292,7 +308,7 @@ def add(self, data, front=False, replace=False): None """ - data = str(expand_path(data)) + data = self._prepare_path(data) if data not in self._l: self._l.insert(0 if front else len(self._l), data) elif replace:
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
sympy__sympy-22696@4bab3f6
sympy/sympy
Python
22,696
physics/optics: Updated Medium to inherit from Basic
<!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> Fixes #22664 #### Brief description of what is fixed or changed Changed `Medium` to inherit from `Basic` instead of `Symbol` and to store its arguments in `args` after checking if any arguments are `None`. This makes `Medium` hashable and also makes `sprepr` work correctly when called on a `Medium` object. Also changed a condition in the `__new__` method of `Medium` that checks whether its arguments are consistent to not raise a `TypeError` due to `Relational` not having a well-defined truth value. #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * physics.optics * changed Medium to inherit from Basic instead of Symbol <!-- END RELEASE NOTES -->
2021-12-17T10:12:50Z
`Medium` not hashable This shows up as an issue in the LaTeX-printing, but is probably an issue elsewhere as well. ``` from sympy.physics.optics import Medium Medium('m') ``` gives ``` Traceback (most recent call last): File "C:\Users\Oscar\miniconda3\lib\site-packages\IPython\core\formatters.py", line 345, in __call__ return method() File "C:\Users\Oscar\sympy\sympy\core\_print_helpers.py", line 64, in _repr_latex_ s = latex(self, mode='plain') File "C:\Users\Oscar\sympy\sympy\printing\printer.py", line 373, in __call__ return self.__wrapped__(*args, **kwargs) File "C:\Users\Oscar\sympy\sympy\printing\latex.py", line 3011, in latex return LatexPrinter(settings).doprint(expr) File "C:\Users\Oscar\sympy\sympy\printing\latex.py", line 254, in doprint tex = Printer.doprint(self, expr) File "C:\Users\Oscar\sympy\sympy\printing\printer.py", line 293, in doprint return self._str(self._print(expr)) File "C:\Users\Oscar\sympy\sympy\printing\printer.py", line 332, in _print return printmethod(expr, **kwargs) File "C:\Users\Oscar\sympy\sympy\printing\latex.py", line 1569, in _print_Symbol if expr in self._settings['symbol_names']: TypeError: unhashable type: 'Medium' ```
It also needs a proper `srepr` method as any additional arguments are not returned. ``` In [10]: srepr(Medium('m', 3)) Out[10]: "Medium('m')" ``` The problem is a badly written `__new__` method: https://github.com/sympy/sympy/blob/81cd2630631e315586f796646a7f9268e71bf484/sympy/physics/optics/medium.py#L71-L89 It's simply not storing its arguments in `_args`. Actually, considering it is a `Symbol`, it shouldn't show (and store) any arguments and just print `m`. The if statements also don't catch all possible inputs. Could the issue also be because Medium does not properly override `__hash__`? Creating a `__hash__` method as ``` def __hash__(self): return super().__hash__() ``` fixes the issue. As an aside, how should `_hashable_content` be implemented for Medium, considering that `__eq__` only checks if the refractive indices are equal? I guess the hashable content should only include the refractive index so that `__hash__` and `__eq__` remain consistent. Am I right here? @ThePauliPrinciple it is indeed not really following the traditional design guidelines, but has been hanging around for eight years or so. (Still no-one seems to have tried to render a LaTeX for it...) @NikhilSDate yes, that is one of the options. I agree with you that there should be some resemblance between `__hash__` and `__eq__`, but I am not in a position to say what is important for `Medium`. Still, probably the best way is to change `Medium` from inheriting `Symbol` to `Expr`(?) and then store all the arguments to make it a "proper" SymPy class. It may be that it solves the hash issue as well then. What is more surprising is that this class can even exist. Since it is a subclass of `Basic`, it should be using `__slots__`, which should trigger an error if `__eq__` is defined but not `__hash__`, as far as I am aware. If it is converted to an `Expr`/`Basic` and stores all its arguments, there should be no reason to define `__eq__` (although less than/greater than might still be required). It should then however be rewritten to not store its arguments as `None`, which I think can currently happen. Using `_sympify` would also be preferable (but that means it can only happen after checking for `None`). If there is a consensus that such a change is needed, I can go ahead and change `Medium` so that it inherits from `Expr` and stores its arguments in `_args`. Should I go ahead?
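A minimal illustration of the direction this discussion settles on: store everything that defines the object in `.args` of a `Basic` subclass, so structural equality, hashing, and `srepr` come from the base class for free. `ToyMedium` is an invented name and is deliberately much simpler than the real `Medium` rework shown in the merged patch further down.

```python
from sympy import srepr
from sympy.core.basic import Basic
from sympy.core.symbol import Str
from sympy.core.sympify import _sympify


class ToyMedium(Basic):
    """Toy class: everything defining the object goes into .args."""

    def __new__(cls, name, n):
        # Convert up front so .args only ever contains Basic objects (never None).
        return super().__new__(cls, Str(name), _sympify(n))

    @property
    def n(self):
        return self.args[1]


a = ToyMedium("m", 2)
b = ToyMedium("m", 2)
print(a == b)               # True  -- structural equality from Basic
print(hash(a) == hash(b))   # True  -- __hash__ stays consistent with __eq__
print(srepr(a))             # ToyMedium(Str('m'), Integer(2))
```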
[ { "body": "This shows up as an issue in the LaTeX-printing, but is probably an issue elsewhere as well.\r\n\r\n```\r\nfrom sympy.physics.optics import Medium\r\nMedium('m')\r\n```\r\ngives\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"C:\\Users\\Oscar\\miniconda3\\lib\\site-packages\\IPython\\core\\formatters.py\", line 345, in __call__\r\n return method()\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\core\\_print_helpers.py\", line 64, in _repr_latex_\r\n s = latex(self, mode='plain')\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\printer.py\", line 373, in __call__\r\n return self.__wrapped__(*args, **kwargs)\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\latex.py\", line 3011, in latex\r\n return LatexPrinter(settings).doprint(expr)\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\latex.py\", line 254, in doprint\r\n tex = Printer.doprint(self, expr)\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\printer.py\", line 293, in doprint\r\n return self._str(self._print(expr))\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\printer.py\", line 332, in _print\r\n return printmethod(expr, **kwargs)\r\n\r\n File \"C:\\Users\\Oscar\\sympy\\sympy\\printing\\latex.py\", line 1569, in _print_Symbol\r\n if expr in self._settings['symbol_names']:\r\n\r\nTypeError: unhashable type: 'Medium'\r\n```", "number": 22664, "title": "`Medium` not hashable" } ]
88ed7abb488da615b007dd2ed5404312caef473c
{ "head_commit": "4bab3f6f5b30ea9bd211fa7e45f7a900e43ea88f", "head_commit_message": "removed check for empty name", "patch_to_review": "diff --git a/sympy/physics/optics/medium.py b/sympy/physics/optics/medium.py\nindex e47179c22bb0..204161864203 100644\n--- a/sympy/physics/optics/medium.py\n+++ b/sympy/physics/optics/medium.py\n@@ -7,7 +7,8 @@\n \n __all__ = ['Medium']\n \n-from sympy.core.symbol import Symbol\n+from sympy.core.basic import Basic\n+from sympy.core.symbol import Str\n from sympy.core.sympify import sympify\n from sympy.functions.elementary.miscellaneous import sqrt\n from sympy.physics.units import speed_of_light, u0, e0\n@@ -18,7 +19,7 @@\n _u0mksa = u0.convert_to(meter*kilogram/(ampere**2*second**2))\n \n \n-class Medium(Symbol):\n+class Medium(Basic):\n \n \"\"\"\n This class represents an optical medium. The prime reason to implement this is\n@@ -69,23 +70,31 @@ class Medium(Symbol):\n \"\"\"\n \n def __new__(cls, name, permittivity=None, permeability=None, n=None):\n- obj = super().__new__(cls, name)\n- obj._permittivity = sympify(permittivity)\n- obj._permeability = sympify(permeability)\n- obj._n = sympify(n)\n+ if not isinstance(name, Str):\n+ name = Str(name)\n+\n if n is not None:\n if permittivity is not None and permeability is None:\n- obj._permeability = n**2/(c**2*obj._permittivity)\n+ permeability = n**2/(c**2*permittivity)\n if permeability is not None and permittivity is None:\n- obj._permittivity = n**2/(c**2*obj._permeability)\n+ permittivity = n**2/(c**2*permeability)\n if permittivity is not None and permittivity is not None:\n- if abs(n - c*sqrt(obj._permittivity*obj._permeability)) > 1e-6:\n- raise ValueError(\"Values are not consistent.\")\n+ expr = abs(n - c*sqrt(permittivity*permeability))\n+ expr = expr.subs({meter: 1, second: 1})\n+ if len(expr.free_symbols) == 0 and expr > 1e-6:\n+ raise ValueError(\"Values are not consistent.\")\n elif permittivity is not None and permeability is not None:\n- obj._n = c*sqrt(permittivity*permeability)\n+ n = c*sqrt(permittivity*permeability)\n elif permittivity is None and permeability is None:\n- obj._permittivity = _e0mksa\n- obj._permeability = _u0mksa\n+ permittivity = _e0mksa\n+ permeability = _u0mksa\n+ n = c*sqrt(permittivity*permeability)\n+ args = list(map(sympify, (permittivity, permeability, n)))\n+ obj = super().__new__(cls, name, *args)\n+ obj.name = name\n+ obj._permittivity = args[0]\n+ obj._permeability = args[1]\n+ obj._n = args[2]\n return obj\n \n @property\n" }
[ { "diff_hunk": "@@ -69,23 +70,32 @@ class Medium(Symbol):\n \"\"\"\n \n def __new__(cls, name, permittivity=None, permeability=None, n=None):\n- obj = super().__new__(cls, name)\n- obj._permittivity = sympify(permittivity)\n- obj._permeability = sympify(permeability)\n- obj._n = sympify(n)\n+\n+ if not isinstance(name, Str):\n+ name = Str(name)\n+\n if n is not None:\n if permittivity is not None and permeability is None:\n- obj._permeability = n**2/(c**2*obj._permittivity)\n+ permeability = n**2/(c**2*permittivity)\n if permeability is not None and permittivity is None:\n- obj._permittivity = n**2/(c**2*obj._permeability)\n+ permittivity = n**2/(c**2*permeability)\n if permittivity is not None and permittivity is not None:\n- if abs(n - c*sqrt(obj._permittivity*obj._permeability)) > 1e-6:\n- raise ValueError(\"Values are not consistent.\")\n+ expr = abs(n - c*sqrt(permittivity*permeability))\n+ expr = expr.subs({meter: 1, second: 1})\n+ if len(expr.free_symbols) == 0 and expr > 1e-6:\n+ raise ValueError(\"Values are not consistent.\")\n elif permittivity is not None and permeability is not None:\n- obj._n = c*sqrt(permittivity*permeability)\n+ n = c*sqrt(permittivity*permeability)\n elif permittivity is None and permeability is None:\n- obj._permittivity = _e0mksa\n- obj._permeability = _u0mksa\n+ permittivity = _e0mksa\n+ permeability = _u0mksa\n+ n = c*sqrt(permittivity*permeability)\n+ args = list(map(sympify, (permittivity, permeability, n)))", "line": null, "original_line": 92, "original_start_line": null, "path": "sympy/physics/optics/medium.py", "start_line": null, "text": "@user1:\n`_sympify` is prefered over `sympify` (it is stricter and shouldn't return non-Basic types, which `sympify` might)\n\n@author:\nAlright, I'll make the change." } ]
ecd1631ec57386beb08df701e63c1886e5c85837
diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py index e027d9ad0f46..19f43a01bcfb 100644 --- a/sympy/core/tests/test_args.py +++ b/sympy/core/tests/test_args.py @@ -4920,6 +4920,16 @@ def test_sympy__physics__optics__medium__Medium(): assert _test_args(Medium('m')) +def test_sympy__physics__optics__medium__MediumN(): + from sympy.physics.optics.medium import Medium + assert _test_args(Medium('m', n=2)) + + +def test_sympy__physics__optics__medium__MediumPP(): + from sympy.physics.optics.medium import Medium + assert _test_args(Medium('m', permittivity=2, permeability=2)) + + def test_sympy__tensor__array__expressions__array_expressions__ArrayContraction(): from sympy.tensor.array.expressions.array_expressions import ArrayContraction from sympy.tensor.indexed import IndexedBase diff --git a/sympy/physics/optics/medium.py b/sympy/physics/optics/medium.py index e47179c22bb0..764b68caad58 100644 --- a/sympy/physics/optics/medium.py +++ b/sympy/physics/optics/medium.py @@ -7,8 +7,9 @@ __all__ = ['Medium'] -from sympy.core.symbol import Symbol -from sympy.core.sympify import sympify +from sympy.core.basic import Basic +from sympy.core.symbol import Str +from sympy.core.sympify import _sympify from sympy.functions.elementary.miscellaneous import sqrt from sympy.physics.units import speed_of_light, u0, e0 @@ -18,7 +19,7 @@ _u0mksa = u0.convert_to(meter*kilogram/(ampere**2*second**2)) -class Medium(Symbol): +class Medium(Basic): """ This class represents an optical medium. The prime reason to implement this is @@ -69,50 +70,35 @@ class Medium(Symbol): """ def __new__(cls, name, permittivity=None, permeability=None, n=None): - obj = super().__new__(cls, name) - obj._permittivity = sympify(permittivity) - obj._permeability = sympify(permeability) - obj._n = sympify(n) + if not isinstance(name, Str): + name = Str(name) + + permittivity = _sympify(permittivity) if permittivity is not None else permittivity + permeability = _sympify(permeability) if permeability is not None else permeability + n = _sympify(n) if n is not None else n + if n is not None: if permittivity is not None and permeability is None: - obj._permeability = n**2/(c**2*obj._permittivity) - if permeability is not None and permittivity is None: - obj._permittivity = n**2/(c**2*obj._permeability) - if permittivity is not None and permittivity is not None: - if abs(n - c*sqrt(obj._permittivity*obj._permeability)) > 1e-6: - raise ValueError("Values are not consistent.") + permeability = n**2/(c**2*permittivity) + return MediumPP(name, permittivity, permeability) + elif permeability is not None and permittivity is None: + permittivity = n**2/(c**2*permeability) + return MediumPP(name, permittivity, permeability) + elif permittivity is not None and permittivity is not None: + raise ValueError("Specifying all of permittivity, permeability, and n is not allowed") + else: + return MediumN(name, n) elif permittivity is not None and permeability is not None: - obj._n = c*sqrt(permittivity*permeability) + return MediumPP(name, permittivity, permeability) elif permittivity is None and permeability is None: - obj._permittivity = _e0mksa - obj._permeability = _u0mksa - return obj + return MediumPP(name, _e0mksa, _u0mksa) + else: + raise ValueError("Arguments are underspecified. Either specify n or any two of permittivity, " + "permeability, and n") @property - def intrinsic_impedance(self): - """ - Returns intrinsic impedance of the medium. 
- - Explanation - =========== - - The intrinsic impedance of a medium is the ratio of the - transverse components of the electric and magnetic fields - of the electromagnetic wave travelling in the medium. - In a region with no electrical conductivity it simplifies - to the square root of ratio of magnetic permeability to - electric permittivity. - - Examples - ======== - - >>> from sympy.physics.optics import Medium - >>> m = Medium('m') - >>> m.intrinsic_impedance - 149896229*pi*kilogram*meter**2/(1250000*ampere**2*second**3) - - """ - return sqrt(self._permeability/self._permittivity) + def name(self): + return self.args[0] @property def speed(self): @@ -131,10 +117,7 @@ def speed(self): True """ - if self._permittivity is not None and self._permeability is not None: - return 1/sqrt(self._permittivity*self._permeability) - else: - return c/self._n + return c / self.n @property def refractive_index(self): @@ -152,6 +135,87 @@ def refractive_index(self): """ return (c/self.speed) + +class MediumN(Medium): + + """ + Represents an optical medium for which only the refractive index is known. + Useful for simple ray optics. + + This class should never be instantiated directly. + Instead it should be instantiated indirectly by instantiating Medium with + only n specified. + + Examples + ======== + >>> from sympy.physics.optics import Medium + >>> m = Medium('m', n=2) + >>> m + MediumN(Str('m'), 2) + """ + + def __new__(cls, name, n): + obj = super(Medium, cls).__new__(cls, name, n) + return obj + + @property + def n(self): + return self.args[1] + + +class MediumPP(Medium): + """ + Represents an optical medium for which the permittivity and permeability are known. + + This class should never be instantiated directly. Instead it should be + instantiated indirectly by instantiating Medium with any two of + permittivity, permeability, and n specified, or by not specifying any + of permittivity, permeability, or n, in which case default values for + permittivity and permeability will be used. + + Examples + ======== + >>> from sympy.physics.optics import Medium + >>> from sympy.abc import epsilon, mu + >>> m1 = Medium('m1', permittivity=epsilon, permeability=mu) + >>> m1 + MediumPP(Str('m1'), epsilon, mu) + >>> m2 = Medium('m2') + >>> m2 + MediumPP(Str('m2'), 625000*ampere**2*second**4/(22468879468420441*pi*kilogram*meter**3), pi*kilogram*meter/(2500000*ampere**2*second**2)) + """ + + + def __new__(cls, name, permittivity, permeability): + obj = super(Medium, cls).__new__(cls, name, permittivity, permeability) + return obj + + @property + def intrinsic_impedance(self): + """ + Returns intrinsic impedance of the medium. + + Explanation + =========== + + The intrinsic impedance of a medium is the ratio of the + transverse components of the electric and magnetic fields + of the electromagnetic wave travelling in the medium. + In a region with no electrical conductivity it simplifies + to the square root of ratio of magnetic permeability to + electric permittivity. 
+ + Examples + ======== + + >>> from sympy.physics.optics import Medium + >>> m = Medium('m') + >>> m.intrinsic_impedance + 149896229*pi*kilogram*meter**2/(1250000*ampere**2*second**3) + + """ + return sqrt(self.permeability / self.permittivity) + @property def permittivity(self): """ @@ -166,7 +230,7 @@ def permittivity(self): 625000*ampere**2*second**4/(22468879468420441*pi*kilogram*meter**3) """ - return self._permittivity + return self.args[1] @property def permeability(self): @@ -182,24 +246,8 @@ def permeability(self): pi*kilogram*meter/(2500000*ampere**2*second**2) """ - return self._permeability - - def __str__(self): - from sympy.printing import sstr - return type(self).__name__ + ': ' + sstr([self._permittivity, - self._permeability, self._n]) - - def __lt__(self, other): - """ - Compares based on refractive index of the medium. - """ - return self.refractive_index < other.refractive_index - - def __gt__(self, other): - return not self < other + return self.args[2] - def __eq__(self, other): - return self.refractive_index == other.refractive_index - - def __ne__(self, other): - return not self == other + @property + def n(self): + return c*sqrt(self.permittivity*self.permeability) diff --git a/sympy/physics/optics/tests/test_medium.py b/sympy/physics/optics/tests/test_medium.py index f3c96682db21..dfbb485f5b8e 100644 --- a/sympy/physics/optics/tests/test_medium.py +++ b/sympy/physics/optics/tests/test_medium.py @@ -27,13 +27,11 @@ def test_medium(): # by small amount from its value in vacuum. m3 = Medium('m3', 9.0*10**(-12)*s**4*A**2/(m**3*kg), 1.45*10**(-6)*kg*m/(A**2*s**2)) assert m3.refractive_index > m1.refractive_index - assert m3 > m1 assert m3 != m1 # Decreasing electric permittivity and magnetic permeability # by small amount from its value in vacuum. m4 = Medium('m4', 7.0*10**(-12)*s**4*A**2/(m**3*kg), 1.15*10**(-6)*kg*m/(A**2*s**2)) assert m4.refractive_index < m1.refractive_index - assert m4 < m1 m5 = Medium('m5', permittivity=710*10**(-12)*s**4*A**2/(m**3*kg), n=1.33) assert abs(m5.intrinsic_impedance - 6.24845417765552*kg*m**2/(A**2*s**3)) \ < 1e-12*kg*m**2/(A**2*s**3) @@ -45,5 +43,6 @@ def test_medium(): < 1e-20*kg*m/(A**2*s**2) m6 = Medium('m6', None, mu, n) assert m6.permittivity == n**2/(c**2*mu) - assert Medium('m7') == Medium('m8', e0, u0) # test for equality + # test for equality of refractive indices + assert Medium('m7').refractive_index == Medium('m8', e0, u0).refractive_index raises(ValueError, lambda:Medium('m9', e0, u0, 2))
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
xonsh__xonsh-4969@d5ccd11
xonsh/xonsh
Python
4,969
fix: web config tool including \r in prompt
<!--- Thanks for opening a PR on xonsh! Please include a news entry with your PR to help keep our changelog up to date! There are instructions available here: https://xon.sh/devguide.html#changelog --> <!--- If there is specific issue / feature request that this PR is addressing, please link to the corresponding issue by using the `#issuenumber` syntax. Thanks again! --> hi guys, this is my first commit. please let me know if i've done anything wrong :) previously using the xonsh web config tool to set a prompt with a newline added "^M" at the end of the prompt. this fix strips all \r characters in the config file and strips all \r characters in the prompt page's post request. closes #4960 ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2022-10-18T00:13:42Z
Web config tool includes \r in prompt When using the web config (xonfig web), newlines are interpreted as "\r\n". On macOS (and Linux, I assume) this adds a "^M" to the prompt, before the newline. ## xonfig ``` +------------------+--------------------------------+ | xonsh | 0.13.3 | | Python | 3.10.7 | | PLY | 3.11 | | have readline | True | | prompt toolkit | 3.0.31 | | shell type | prompt_toolkit | | history backend | json | | pygments | 2.13.0 | | on posix | True | | on linux | False | | on darwin | True | | on windows | False | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | +------------------+--------------------------------+ ``` ## Expected Behavior I would expect the "\r" to be stripped from the text field before the prompt is set. ## Current Behavior I was able to confirm that the "\r" is being added to the prompt variable by checking my .xonshrc after using the web config. Removing this value removes the "^M" from my prompt. ## Steps to Reproduce * Run xonfig web from your shell. * Select a multi-line prompt, and set your .xonshrc * Exit and restart xonsh. You'll see a "^M" before the new line. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
i'm happy to try working on this, could someone assign me the issue? All yours, @peipacut -- we historically haven't relied on issue assignment, you should feel free to pick up any open issues and just ping in the issue that you're going to work on them to avoid duplicate effort.
[ { "body": "When using the web config (xonfig web), newlines are interpreted as \"\\r\\n\". On macOS (and Linux, I assume) this adds a \"^M\" to the prompt, before the newline. \r\n## xonfig\r\n```\r\n+------------------+--------------------------------+\r\n| xonsh | 0.13.3 |\r\n| Python | 3.10.7 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | 3.0.31 |\r\n| shell type | prompt_toolkit |\r\n| history backend | json |\r\n| pygments | 2.13.0 |\r\n| on posix | True |\r\n| on linux | False |\r\n| on darwin | True |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| xontrib | [] |\r\n+------------------+--------------------------------+\r\n```\r\n\r\n## Expected Behavior\r\nI would expect the \"\\r\" to be stripped from the text field before the prompt is set. \r\n\r\n## Current Behavior\r\nI was able to confirm that the \"\\r\" is being added to the prompt variable by checking my .xonshrc after using the web config. Removing this value removes the \"^M\" from my prompt.\r\n\r\n## Steps to Reproduce\r\n* Run xonfig web from your shell.\r\n* Select a multi-line prompt, and set your .xonshrc\r\n* Exit and restart xonsh. You'll see a \"^M\" before the new line.\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 4960, "title": "Web config tool includes \\r in prompt" } ]
ed11f319fad27dd452808e1b88e1f1776f8d8772
{ "head_commit": "d5ccd117e710f8f3b086be5d23f7628702ec835a", "head_commit_message": "[pre-commit.ci] auto fixes from pre-commit.com hooks\n\nfor more information, see https://pre-commit.ci", "patch_to_review": "diff --git a/xonsh/webconfig/file_writes.py b/xonsh/webconfig/file_writes.py\nindex 1e4b4aec21..4388b4371d 100644\n--- a/xonsh/webconfig/file_writes.py\n+++ b/xonsh/webconfig/file_writes.py\n@@ -1,5 +1,6 @@\n \"\"\"functions to update rc files\"\"\"\n import os\n+import re\n import typing as tp\n \n RENDERERS: tp.List[tp.Callable] = []\n@@ -39,7 +40,7 @@ def config_to_xonsh(\n for func in RENDERERS:\n lines.extend(func(config))\n lines.append(suffix)\n- return \"\\n\".join(lines)\n+ return re.sub(r\"\\\\r\", \"\", \"\\n\".join(lines))\n \n \n def insert_into_xonshrc(\ndiff --git a/xonsh/webconfig/routes.py b/xonsh/webconfig/routes.py\nindex 6568a27a3e..f9b8e36522 100644\n--- a/xonsh/webconfig/routes.py\n+++ b/xonsh/webconfig/routes.py\n@@ -1,5 +1,6 @@\n import cgi\n import inspect\n+import re\n import sys\n from typing import TYPE_CHECKING\n \n@@ -239,10 +240,13 @@ def get(self):\n \n def post(self, data: \"cgi.FieldStorage\"):\n if data:\n- prompt = data.getvalue(self.var_name)\n+ prompt = data.getvalue(self.var_name).replace(\"\\r\", \"\")\n self.env[self.var_name] = prompt\n self.update_rc(prompt=prompt)\n \n+ def ee(self, str):\n+ return re.sub(r\"\\\\r\", \"\", str)\n+\n \n class XontribsPage(Routes):\n path = \"/xontribs\"\n" }
[ { "diff_hunk": "@@ -239,10 +240,13 @@ def get(self):\n \n def post(self, data: \"cgi.FieldStorage\"):\n if data:\n- prompt = data.getvalue(self.var_name)\n+ prompt = data.getvalue(self.var_name).replace(\"\\r\", \"\")\n self.env[self.var_name] = prompt\n self.update_rc(prompt=prompt)\n \n+ def ee(self, str):\n+ return re.sub(r\"\\\\r\", \"\", str)\n+", "line": null, "original_line": 249, "original_start_line": 247, "path": "xonsh/webconfig/routes.py", "start_line": null, "text": "@user1:\nIs this being used or is this left over from debugging?\n\n@author:\nremoved" } ]
2ad920c2f9c243e8be42534d0a81ba2ebfb7a2f2
diff --git a/xonsh/webconfig/file_writes.py b/xonsh/webconfig/file_writes.py index 1e4b4aec21..4388b4371d 100644 --- a/xonsh/webconfig/file_writes.py +++ b/xonsh/webconfig/file_writes.py @@ -1,5 +1,6 @@ """functions to update rc files""" import os +import re import typing as tp RENDERERS: tp.List[tp.Callable] = [] @@ -39,7 +40,7 @@ def config_to_xonsh( for func in RENDERERS: lines.extend(func(config)) lines.append(suffix) - return "\n".join(lines) + return re.sub(r"\\r", "", "\n".join(lines)) def insert_into_xonshrc( diff --git a/xonsh/webconfig/routes.py b/xonsh/webconfig/routes.py index 6568a27a3e..92e8218952 100644 --- a/xonsh/webconfig/routes.py +++ b/xonsh/webconfig/routes.py @@ -239,7 +239,7 @@ def get(self): def post(self, data: "cgi.FieldStorage"): if data: - prompt = data.getvalue(self.var_name) + prompt = data.getvalue(self.var_name).replace("\r", "") self.env[self.var_name] = prompt self.update_rc(prompt=prompt)
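For reference, a minimal standalone sketch of the stripping behavior the merged patch above applies (mirroring the `.replace("\r", "")` call in `routes.py`); the `raw_prompt` value is a made-up example, not taken from the PR:

```python
def clean_prompt(value: str) -> str:
    # Browsers submit textarea content with CRLF line endings; only "\n"
    # should survive into the stored prompt, which is what the fix enforces.
    return value.replace("\r", "")

raw_prompt = "{user}@{hostname}\r\n{cwd} $ "  # hypothetical posted form value
print(repr(clean_prompt(raw_prompt)))         # '{user}@{hostname}\n{cwd} $ '
```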
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
xonsh__xonsh-5340@468ba13
xonsh/xonsh
Python
5,340
Fix jobs.py: list index out of range
Fixes https://github.com/xonsh/xonsh/issues/2544#issuecomment-2059957967
Closes #2544

## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2024-04-17T09:14:59Z
"command&" misparsed

`true&` is misparsed. It should execute `true` in the background, but instead treats the whole thing as the command name.

`true &` works fine.

## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
Thanks for reporting! I don't think that this is easily fixable, since one of the ways that the parser works is that it says whitespace counts in subprocess mode. Addressing this would special case backgrounding to have whitespace not matter for the last token. That said, we probably would still want this.... This also affects `bash -c 'sleep 5 &'` invocation. Example: > `xonsh $` indicates a xonsh prompt > `bash $` indicates a bash prompt ``` console xonsh $ import time; start = time.time(); bash -c 'sleep 5 &'; print(time.time() - start) 5.01800537109375 bash $ time bash -c 'sleep 5 &' real 0m0.014s user 0m0.008s sys 0m0.005s ``` This make tools that use background jobs unusable in the xonsh. For example `pass -c something` (https://www.passwordstore.org/) just blocks the shell(until password is not deleted from clipboard). Hi @corpix - I think bash is doing something very strange here.... I am not sure that is the correct behaviour to replicate. Though the pass case is a notable usecase. I'm having similar behavior with `nvim-qt`; instead of backgrounding like it does in other shells, it blocks the `xonsh` session until I exit `nvim-qt`. Seems like that's probably a separate issue from this one... if I don't find an existing issue for it, I'll open one. Hi @whitelynx - that seems like a separate issue. Would you mind opening up another issue please? Actually, it looks like my issue is a duplicate of #2060... I'll try the hack from that issue. On current master I can repeat this with more errors: ```xsh echo& ``` ```xsh TRACE SUBPROC: (['echo'], '&'), captured=hiddenobject xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename> Traceback (most recent call last): File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/specs.py", line 474, in _run_binary p = self.cls(cmd, bufsize=bufsize, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/subprocess.py", line 1026, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/subprocess.py", line 1955, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'echo' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/pipelines.py", line 167, in __init__ proc = spec.run(pipeline_group=pipeline_group) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/specs.py", line 456, in run p = self._run_binary(kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/specs.py", line 493, in _run_binary raise xt.XonshError(e) from ex xonsh.tools.XonshError: xonsh: subprocess mode: command not found: 'echo' xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename> Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/built_ins.py", line 206, in subproc_captured_hiddenobject return xonsh.procs.specs.run_subproc(cmds, captured="hiddenobject", envs=envs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/specs.py", line 908, in run_subproc return _run_specs(specs, cmds) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/procs/specs.py", line 922, in _run_specs xj.add_job( File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/jobs.py", line 376, in add_job print_one_job(num) File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/jobs.py", line 354, in print_one_job info = format_job_string(num) ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pc/.local/mamba-env/Users/pc/git/trace/lib/python3.12/site-packages/xonsh/jobs.py", line 347, in format_job_string pid = job["pids"][-1] ~~~~~~~~~~~^^^^ IndexError: list index out of range ``` I fixed the exception. Other cases works fine: ```xsh true& # [1]+ running: true & true & # [1]+ running: true & qwe& # xonsh: subprocess mode: command not found: 'qwe' # [1]+ running: qwe & qwe & # xonsh: subprocess mode: command not found: 'qwe' # [1]+ running: qwe & aliases['qqq']='echo 123' qqq& # [1]+ running: qqq & # 123 qqq & # [1]+ running: qqq & # 123 ```
[ { "body": "`true&` is misparsed. It should execute `true` in the background, but instead treats the whole thing as the command name.\r\n\r\n`true &` works fine.\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 2544, "title": "\"command&\" misparsed" } ]
6c94d4ad6c844d50d2a39319dba4aee35295f2f1
{ "head_commit": "468ba134594db90c39fdc968750e67612f5b0d7b", "head_commit_message": "test: update venv activator test", "patch_to_review": "diff --git a/news/fix-jobs-index.rst b/news/fix-jobs-index.rst\nnew file mode 100644\nindex 0000000000..9f735037bc\n--- /dev/null\n+++ b/news/fix-jobs-index.rst\n@@ -0,0 +1,23 @@\n+**Added:**\n+\n+* <news item>\n+\n+**Changed:**\n+\n+* <news item>\n+\n+**Deprecated:**\n+\n+* <news item>\n+\n+**Removed:**\n+\n+* <news item>\n+\n+**Fixed:**\n+\n+* Jobs: fixed \"index out of range\" exception.\n+\n+**Security:**\n+\n+* <news item>\ndiff --git a/tests/test_virtualenv_activator.py b/tests/test_virtualenv_activator.py\nindex 345e7969de..96d490cd1a 100644\n--- a/tests/test_virtualenv_activator.py\n+++ b/tests/test_virtualenv_activator.py\n@@ -8,9 +8,7 @@\n def test_xonsh_activator(tmp_path):\n # Create virtualenv\n venv_dir = tmp_path / \"venv\"\n- assert b\"XonshActivator\" in check_output(\n- [sys.executable, \"-m\", \"virtualenv\", str(venv_dir)]\n- )\n+ check_output([sys.executable, \"-m\", \"venv\", str(venv_dir)])\n assert venv_dir.is_dir()\n \n # Check activation script created\ndiff --git a/xonsh/jobs.py b/xonsh/jobs.py\nindex 097346a1b0..652967500b 100644\n--- a/xonsh/jobs.py\n+++ b/xonsh/jobs.py\n@@ -344,9 +344,9 @@ def format_job_string(num: int) -> str:\n pos = \"+\" if tasks[0] == num else \"-\" if tasks[1] == num else \" \"\n status = job[\"status\"]\n cmd = \" \".join([\" \".join(i) if isinstance(i, list) else i for i in job[\"cmds\"]])\n- pid = job[\"pids\"][-1]\n+ pid = f\"({job['pids'][-1]})\" if job[\"pids\"] else \"\"\n bg = \" &\" if job[\"bg\"] else \"\"\n- return f\"[{num}]{pos} {status}: {cmd}{bg} ({pid})\"\n+ return f\"[{num}]{pos} {status}: {cmd}{bg} {pid}\"\n \n \n def print_one_job(num, outfile=sys.stdout):\n" }
[ { "diff_hunk": "@@ -8,9 +8,7 @@\n def test_xonsh_activator(tmp_path):\n # Create virtualenv\n venv_dir = tmp_path / \"venv\"\n- assert b\"XonshActivator\" in check_output(\n- [sys.executable, \"-m\", \"virtualenv\", str(venv_dir)]\n- )\n+ check_output([sys.executable, \"-m\", \"venv\", str(venv_dir)])", "line": null, "original_line": 11, "original_start_line": null, "path": "tests/test_virtualenv_activator.py", "start_line": null, "text": "@user1:\nI am checking this\n\n@user1:\n@user2-code @user3 this module better be extracted to its own plugin like suggested by the maintainer https://github.com/xonsh/xonsh/issues/3689 . The plugin doesn't need to depend on Xonsh at all. It can be a pure Python file as well. \n\n@user1:\nAlso there is a recent release from virtualenv which could have broken this test. https://pypi.org/project/virtualenv/20.25.2/ . Also this is not a stdlib and we have vox for supporting venvs . So I suggest moving this code to a separate project." } ]
7ac0af6adbaa6d25c9ef2b41b91e9aa9b516f751
diff --git a/news/fix-jobs-index.rst b/news/fix-jobs-index.rst new file mode 100644 index 0000000000..9f735037bc --- /dev/null +++ b/news/fix-jobs-index.rst @@ -0,0 +1,23 @@ +**Added:** + +* <news item> + +**Changed:** + +* <news item> + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* Jobs: fixed "index out of range" exception. + +**Security:** + +* <news item> diff --git a/xonsh/jobs.py b/xonsh/jobs.py index 097346a1b0..652967500b 100644 --- a/xonsh/jobs.py +++ b/xonsh/jobs.py @@ -344,9 +344,9 @@ def format_job_string(num: int) -> str: pos = "+" if tasks[0] == num else "-" if tasks[1] == num else " " status = job["status"] cmd = " ".join([" ".join(i) if isinstance(i, list) else i for i in job["cmds"]]) - pid = job["pids"][-1] + pid = f"({job['pids'][-1]})" if job["pids"] else "" bg = " &" if job["bg"] else "" - return f"[{num}]{pos} {status}: {cmd}{bg} ({pid})" + return f"[{num}]{pos} {status}: {cmd}{bg} {pid}" def print_one_job(num, outfile=sys.stdout):
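A small self-contained sketch of the guard the merged patch above adds in `format_job_string` (emit the pid suffix only when the job has recorded pids); the `job` dicts here are simplified stand-ins for xonsh's real task bookkeeping, not its actual data structures:

```python
def format_job_string(num: int, job: dict) -> str:
    # Before the fix, job["pids"][-1] raised IndexError for jobs with no
    # recorded pids (e.g. "echo&" when the command lookup failed).
    cmd = " ".join(" ".join(i) if isinstance(i, list) else i for i in job["cmds"])
    pid = f"({job['pids'][-1]})" if job["pids"] else ""
    bg = " &" if job["bg"] else ""
    return f"[{num}]+ {job['status']}: {cmd}{bg} {pid}"

print(format_job_string(1, {"cmds": [["true"]], "pids": [], "bg": True, "status": "running"}))
print(format_job_string(2, {"cmds": [["sleep", "5"]], "pids": [12345], "bg": True, "status": "running"}))
```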
{ "difficulty": "low", "estimated_review_effort": 2, "problem_domain": "Bug Fixes" }
sympy__sympy-22677@cc0c63a
sympy/sympy
Python
22,677
MatrixElement symbol property added; validate shape compatible with indices
#### Brief description of what is fixed or changed
Fixes #22676

##### symbol attribute has been added
MatrixElement behaves more like Indexed in terms of argument access:
```python
>>> IndexedBase('x')[0].base
x
>>> MatrixSymbol('x',2,2)[0,0].symbol
x
```

##### invalid substitutions no longer succeed
In master,
```python
X = MatrixSymbol("X", 2, 2)
Y = MatrixSymbol("Y", 1, 2)
assert X[1, 1].subs(X, Y) == Y[1, 1]

>>> Y[1,1]
Traceback (most recent call last):
...
IndexError: indices out of bounds
```
In this PR that substitution no longer succeeds, so you don't end up with an invalid expression.

#### Other comments
There is some divergence in how MatrixSymbol/IndexedBase and MatrixElement/Indexed behave in terms of free symbols and attributes. This should be looked at by someone who works with matrix expressions to see if these changes make sense.

- [x] QUESTION: is there a reason to store Indexed.label as a Symbol but MatrixSymbol as a Str? ANSWER: no (see #22391 and #22272)

#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* matrices
  * replacement of MatrixSymbol in MatrixElement requires shape to be compatible with indices to succeed
  * MatrixElement now has a `symbol` attribute (analogous to the `base` attribute of `IndexedBase`)
  * `eval_sum_hyper` leaves unchanged arguments for which `is_hypergeometric` is False and those which produce singularities at `a` or `b + 1`, where `a` and `b` are the lower and upper limits of evaluation, respectively
* stats
  * removed deprecation warning about using `evaluate=False`
<!-- END RELEASE NOTES -->
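As a rough illustration of the `eval_sum_hyper` release note, the new test `test_pr_22677` in this patch expects sums whose hypergeometric closed form would hit a singularity at the lower limit to come back unevaluated; this sketch assumes a SymPy build that already contains the change:

```python
from sympy import Sum, Symbol
from sympy.abc import x

b = Symbol('b', integer=True, positive=True)
# 1/x**2 is singular at the lower limit 0, so .doit() should now return
# the Sum unchanged instead of an invalid closed form.
print(Sum(1/x**2, (x, 0, b)).doit())  # Sum(x**(-2), (x, 0, b))
```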
2021-12-15T18:43:24Z
MatrixElement allows bad subs

```python
>>> m22 = MatrixSymbol("A",2,2)
>>> m11 = MatrixSymbol("B",1,1)
>>> m22.subs(m22,m11) == m11  # this is ok, just changing a symbol without context
True
>>> m22[1,1].subs(m22, m11)  # this is not ok -- it indicates a non-existent element in B
B[1, 1]
```
If you enter the element manually you will get the error:
```python
>>> B[1, 1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: (1, 1)
```
Such a substitution should either fail to change the object or else raise a KeyError.
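A short sketch of the behavior this PR settles on (the substitution raises rather than producing the phantom element); it assumes a SymPy build with the new validation in place, since on releases without it the substitution silently succeeds as shown above:

```python
from sympy import MatrixSymbol

m22 = MatrixSymbol("A", 2, 2)
m11 = MatrixSymbol("B", 1, 1)
try:
    m22[1, 1].subs(m22, m11)   # B has no (1, 1) entry
except IndexError as exc:
    print("rejected:", exc)    # rejected: indices out of range
```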
[ { "body": "```python\r\n>>> m22 = MatrixSymbol(\"A\",2,2)\r\n>>> m11 = MatrixSymbol(\"B\",1,1)\r\n>>> m22.subs(m22,m11) == m11 # this is ok, just changing a symbol without context\r\nTrue\r\n>>> m22[1,1].subs(m22, m11) # this is not ok -- it indicates a non-existent element in B\r\nB[1, 1]\r\n```\r\nIf you enter the element manually you will get the error:\r\n```python\r\n>>> B[1, 1]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nKeyError: (1, 1)\r\n```\r\nSuch a substitution should either fail to change the object or else raise a KeyError.", "number": 22676, "title": "MatrixElement allows bad subs" } ]
44588dbb8c7ab833a8acb0cad94e395db82685e5
{ "head_commit": "cc0c63ab621e52f1e4ec2a27981e44208437b910", "head_commit_message": "watch for illegals at both limits", "patch_to_review": "diff --git a/sympy/concrete/summations.py b/sympy/concrete/summations.py\nindex 799667109e3c..f2c74da5eaaf 100644\n--- a/sympy/concrete/summations.py\n+++ b/sympy/concrete/summations.py\n@@ -28,6 +28,7 @@\n from sympy.polys.partfrac import apart\n from sympy.polys.polyerrors import PolynomialError, PolificationFailed\n from sympy.polys.polytools import parallel_poly_from_expr, Poly, factor\n+from sympy.polys.polyutils import illegal\n from sympy.polys.rationaltools import together\n from sympy.series.limitseq import limit_seq\n from sympy.series.order import O\n@@ -1295,6 +1296,9 @@ def _eval_sum_hyper(f, i, a):\n def eval_sum_hyper(f, i_a_b):\n i, a, b = i_a_b\n \n+ if f.is_hypergeometric(i) is False:\n+ return\n+\n if (b - a).is_Integer:\n # We are never going to do better than doing the sum in the obvious way\n return None\n@@ -1307,10 +1311,15 @@ def eval_sum_hyper(f, i_a_b):\n if res is not None:\n return Piecewise(res, (old_sum, True))\n else:\n+ n_illegal = lambda x: sum(x.count(_) for _ in illegal)\n+ had = n_illegal(f)\n+ # check that no extra illegals are introduced\n res1 = _eval_sum_hyper(f, i, a)\n+ if res1 is None or n_illegal(res1) > had:\n+ return\n res2 = _eval_sum_hyper(f, i, b + 1)\n- if res1 is None or res2 is None:\n- return None\n+ if res2 is None or n_illegal(res2) > had:\n+ return\n (res1, cond1), (res2, cond2) = res1, res2\n cond = And(cond1, cond2)\n if cond == False:\ndiff --git a/sympy/concrete/tests/test_gosper.py b/sympy/concrete/tests/test_gosper.py\nindex 70cc488cb844..fc81b70769d7 100644\n--- a/sympy/concrete/tests/test_gosper.py\n+++ b/sympy/concrete/tests/test_gosper.py\n@@ -8,8 +8,8 @@\n from sympy.functions.special.gamma_functions import gamma\n from sympy.polys.polytools import Poly\n from sympy.simplify.simplify import simplify\n-from sympy.abc import a, b, j, k, m, n, r, x\n from sympy.concrete.gosper import gosper_normal, gosper_sum, gosper_term\n+from sympy.abc import a, b, j, k, m, n, r, x\n \n \n def test_gosper_normal():\ndiff --git a/sympy/concrete/tests/test_sums_products.py b/sympy/concrete/tests/test_sums_products.py\nindex 10d7d14e0d80..93e924250596 100644\n--- a/sympy/concrete/tests/test_sums_products.py\n+++ b/sympy/concrete/tests/test_sums_products.py\n@@ -29,7 +29,6 @@\n from sympy.simplify.combsimp import combsimp\n from sympy.simplify.simplify import simplify\n from sympy.tensor.indexed import (Idx, Indexed, IndexedBase)\n-from sympy.abc import a, b, c, d, k, m, x, y, z\n from sympy.concrete.summations import (\n telescopic, _dummy_with_inherited_properties_concrete, eval_sum_residue)\n from sympy.concrete.expr_with_intlimits import ReorderError\n@@ -38,6 +37,7 @@\n from sympy.matrices import (Matrix, SparseMatrix,\n ImmutableDenseMatrix, ImmutableSparseMatrix)\n from sympy.core.mod import Mod\n+from sympy.abc import a, b, c, d, k, m, x, y, z\n \n n = Symbol('n', integer=True)\n f, g = symbols('f g', cls=Function)\n@@ -1186,8 +1186,9 @@ def test_issue_14640():\n \n def test_issue_15943():\n s = Sum(binomial(n, k)*factorial(n - k), (k, 0, n)).doit().rewrite(gamma)\n- assert s == -E*(n + 1)*gamma(n + 1)*lowergamma(n + 1, 1)/gamma(n + 2\n- ) + E*gamma(n + 1)\n+ assert s == Sum(gamma(n + 1)/gamma(k + 1), (k, 0, n))\n+ s = s.doit()\n+ assert s == (-E*(n + 1)*lowergamma(n + 1, 1)/factorial(n + 1) + E)*gamma(n + 1)\n assert s.simplify() == E*(factorial(n) - lowergamma(n + 1, 1))\n \n \n@@ -1586,3 
+1587,10 @@ def test_process_limits():\n raises(TypeError, lambda: D(x, x > 0))\n raises(ValueError, lambda: D(x, Interval(1, 3)))\n raises(NotImplementedError, lambda: D(x, (x, union)))\n+\n+\n+def test_pr_22677():\n+ b = Symbol('b', integer=True, positive=True)\n+ assert Sum(1/x**2,(x, 0, b)).doit() == Sum(x**(-2), (x, 0, b))\n+ assert Sum(1/(x - b)**2,(x, 0, b-1)).doit() == Sum(\n+ (-b + x)**(-2), (x, 0, b - 1))\ndiff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py\nindex 78db834466e6..ccbf3588a00a 100644\n--- a/sympy/matrices/expressions/matexpr.py\n+++ b/sympy/matrices/expressions/matexpr.py\n@@ -274,8 +274,8 @@ def is_valid(idx):\n return isinstance(idx, (int, Integer, Symbol, Expr))\n return (is_valid(i) and is_valid(j) and\n (self.rows is None or\n- (0 <= i) != False and (i < self.rows) != False) and\n- (0 <= j) != False and (j < self.cols) != False)\n+ (i >= -self.rows) != False and (i < self.rows) != False) and\n+ (j >= -self.cols) != False and (j < self.cols) != False)\n \n def __getitem__(self, key):\n if not isinstance(key, tuple) and isinstance(key, slice):\n@@ -589,10 +589,14 @@ class MatrixElement(Expr):\n def __new__(cls, name, n, m):\n n, m = map(_sympify, (n, m))\n from sympy.matrices.matrices import MatrixBase\n- if isinstance(name, (MatrixBase,)):\n+ if isinstance(name, MatrixBase):\n if n.is_Integer and m.is_Integer:\n return name[n, m]\n- if isinstance(name, str):\n+ name = _sympify(name) # change mutable into immutable\n+ elif isinstance(name, MatrixSymbol):\n+ if not name.valid_index(n, m):\n+ raise IndexError('indices out of range')\n+ elif isinstance(name, str):\n name = Symbol(name)\n else:\n name = _sympify(name)\n@@ -601,6 +605,10 @@ def __new__(cls, name, n, m):\n obj = Expr.__new__(cls, name, n, m)\n return obj\n \n+ @property\n+ def symbol(self):\n+ return self.args[0]\n+\n def doit(self, **kwargs):\n deep = kwargs.get('deep', True)\n if deep:\ndiff --git a/sympy/matrices/expressions/tests/test_matexpr.py b/sympy/matrices/expressions/tests/test_matexpr.py\nindex ff0c30e8537d..7f21fb1016bf 100644\n--- a/sympy/matrices/expressions/tests/test_matexpr.py\n+++ b/sympy/matrices/expressions/tests/test_matexpr.py\n@@ -2,7 +2,7 @@\n from sympy.core.exprtools import gcd_terms\n from sympy.core.function import (diff, expand)\n from sympy.core.relational import Eq\n-from sympy.core.symbol import (Dummy, Symbol)\n+from sympy.core.symbol import (Dummy, Symbol, Str)\n from sympy.functions.special.tensor_functions import KroneckerDelta\n from sympy.matrices.dense import zeros\n from sympy.polys.polytools import factor\n@@ -48,10 +48,13 @@ def test_matrix_symbol_creation():\n raises(ValueError, lambda: MatrixSymbol('A', n, n))\n \n \n-def test_shape():\n+def test_matexpr_properties():\n assert A.shape == (n, m)\n assert (A*B).shape == (n, l)\n raises(ShapeError, lambda: B*A)\n+ assert A[0, 1].indices == (0, 1)\n+ assert A[0, 0].symbol == A\n+ assert A[0, 0].symbol.name == 'A'\n \n \n def test_matexpr():\n@@ -61,7 +64,7 @@ def test_matexpr():\n assert (A*B).shape == (n, l)\n \n \n-def test_subs():\n+def test_matexpr_subs():\n A = MatrixSymbol('A', n, m)\n B = MatrixSymbol('B', m, l)\n C = MatrixSymbol('C', m, l)\n@@ -70,6 +73,30 @@ def test_subs():\n assert (A*B).subs(B, C) == A*C\n assert (A*B).subs(l, n).is_square\n \n+ W = MatrixSymbol(\"W\", 3, 3)\n+ X = MatrixSymbol(\"X\", 2, 2)\n+ Y = MatrixSymbol(\"Y\", 1, 2)\n+ Z = MatrixSymbol(\"Z\", n, 2)\n+ # no restrictions on Symbol replacement\n+ assert X.subs(X, Y) == Y\n+ # it might be 
better to just change the name\n+ y = Str('y')\n+ assert X.subs(Str(\"X\"), y).args == (y, 2, 2)\n+ # it's ok to introduce a wider matrix\n+ assert X[1, 1].subs(X, W) == W[1, 1]\n+ # but for a given MatrixExpression, only change\n+ # name if indexing on the new shape is valid.\n+ # Here, X is 2,2; Y is 1,2 and Y[1, 1] is out\n+ # of range so an error is raised\n+ raises(IndexError, lambda: X[1, 1].subs(X, Y))\n+ # here, [0, 1] is in range so the subs succeeds\n+ assert X[0, 1].subs(X, Y) == Y[0, 1]\n+ # and here the size of n will accept any index\n+ # in the first position\n+ assert W[2, 1].subs(W, Z) == Z[2, 1]\n+ # but not in the second position\n+ raises(IndexError, lambda: W[2, 2].subs(W, Z))\n+\n A = SparseMatrix([[1, 2], [3, 4]])\n B = Matrix([[1, 2], [3, 4]])\n C, D = MatrixSymbol('C', 2, 2), MatrixSymbol('D', 2, 2)\n@@ -187,11 +214,15 @@ def test_invariants():\n assert obj == obj.__class__(*obj.args)\n \n \n-def test_indexing():\n+def test_matexpr_indexing():\n A = MatrixSymbol('A', n, m)\n A[1, 2]\n A[l, k]\n- A[l+1, k+1]\n+ A[l + 1, k + 1]\n+ A = MatrixSymbol('A', 2, 1)\n+ for i in range(-2, 2):\n+ for j in range(-1, 1):\n+ A[i, j]\n \n \n def test_single_indexing():\ndiff --git a/sympy/matrices/expressions/tests/test_trace.py b/sympy/matrices/expressions/tests/test_trace.py\nindex f00bd31a8181..3bd66bec2377 100644\n--- a/sympy/matrices/expressions/tests/test_trace.py\n+++ b/sympy/matrices/expressions/tests/test_trace.py\n@@ -8,6 +8,8 @@\n )\n from sympy.matrices.expressions.special import OneMatrix\n from sympy.testing.pytest import raises\n+from sympy.abc import i\n+\n \n n = symbols('n', integer=True)\n A = MatrixSymbol('A', n, n)\n@@ -95,8 +97,9 @@ def test_trace_constant_factor():\n assert trace(MatMul(2, X)) == 10\n \n \n-def test_rewrite():\n- assert isinstance(trace(A).rewrite(Sum), Sum)\n+def test_trace_rewrite():\n+ assert trace(A).rewrite(Sum) == Sum(A[i, i], (i, 0, n - 1))\n+ assert trace(eye(3)).rewrite(Sum) == 3\n \n \n def test_trace_normalize():\ndiff --git a/sympy/matrices/expressions/trace.py b/sympy/matrices/expressions/trace.py\nindex edecdd4b3924..8b462a5c4a5f 100644\n--- a/sympy/matrices/expressions/trace.py\n+++ b/sympy/matrices/expressions/trace.py\n@@ -2,7 +2,7 @@\n from sympy.core.expr import Expr, ExprBuilder\n from sympy.core.singleton import S\n from sympy.core.sorting import default_sort_key\n-from sympy.core.symbol import Dummy\n+from sympy.core.symbol import uniquely_named_symbol\n from sympy.core.sympify import sympify\n from sympy.matrices.matrices import MatrixBase\n from sympy.matrices.common import NonSquareMatrixError\n@@ -144,8 +144,9 @@ def get_arg_key(x):\n \n def _eval_rewrite_as_Sum(self, expr, **kwargs):\n from sympy.concrete.summations import Sum\n- i = Dummy('i')\n- return Sum(self.arg[i, i], (i, 0, self.arg.rows-1)).doit()\n+ i = uniquely_named_symbol('i', expr)\n+ s = Sum(self.arg[i, i], (i, 0, self.arg.rows - 1))\n+ return s.doit()\n \n \n def trace(expr):\ndiff --git a/sympy/stats/rv.py b/sympy/stats/rv.py\nindex 34384d4e2bc8..31691e1a75c9 100644\n--- a/sympy/stats/rv.py\n+++ b/sympy/stats/rv.py\n@@ -39,11 +39,9 @@\n from sympy.sets.sets import FiniteSet, ProductSet, Intersection\n from sympy.solvers.solveset import solveset\n from sympy.external import import_module\n-from sympy.utilities.misc import filldedent\n from sympy.utilities.decorator import doctest_depends_on\n from sympy.utilities.exceptions import SymPyDeprecationWarning\n from sympy.utilities.iterables import iterable\n-import warnings\n \n \n x = Symbol('x')\n@@ 
-841,11 +839,6 @@ def probability(condition, given_condition=None, numsamples=None,\n from sympy.stats.symbolic_probability import Probability\n if evaluate:\n return Probability(condition, given_condition).doit(**kwargs)\n- ### TODO: Remove the user warnings in the future releases\n- message = (\"Since version 1.7, using `evaluate=False` returns `Probability` \"\n- \"object. If you want unevaluated Integral/Sum use \"\n- \"`P(condition, given_condition, evaluate=False).rewrite(Integral)`\")\n- warnings.warn(filldedent(message))\n return Probability(condition, given_condition)\n \n \ndiff --git a/sympy/stats/tests/test_discrete_rv.py b/sympy/stats/tests/test_discrete_rv.py\nindex 0e3abb6d48c8..b011872abb98 100644\n--- a/sympy/stats/tests/test_discrete_rv.py\n+++ b/sympy/stats/tests/test_discrete_rv.py\n@@ -1,7 +1,7 @@\n from sympy.concrete.summations import Sum\n from sympy.core.numbers import (I, Rational, oo, pi)\n from sympy.core.singleton import S\n-from sympy.core.symbol import (Dummy, Symbol)\n+from sympy.core.symbol import Symbol\n from sympy.functions.elementary.complexes import (im, re)\n from sympy.functions.elementary.exponential import log\n from sympy.functions.elementary.integers import floor\n@@ -24,7 +24,7 @@\n FlorySchulz, Poisson, Geometric, Hermite, Logarithmic,\n NegativeBinomial, Skellam, YuleSimon, Zeta,\n DiscreteRV)\n-from sympy.testing.pytest import slow, nocache_fail, raises, ignore_warnings\n+from sympy.testing.pytest import slow, nocache_fail, raises\n from sympy.stats.symbolic_probability import Expectation\n \n x = Symbol('x')\n@@ -43,14 +43,15 @@ def test_Poisson():\n l = 3\n x = Poisson('x', l)\n assert E(x) == l\n+ assert E(2*x) == 2*l\n assert variance(x) == l\n assert density(x) == PoissonDistribution(l)\n- with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed\n- assert isinstance(E(x, evaluate=False), Expectation)\n- assert isinstance(E(2*x, evaluate=False), Expectation)\n+ assert isinstance(E(x, evaluate=False), Expectation)\n+ assert isinstance(E(2*x, evaluate=False), Expectation)\n # issue 8248\n assert x.pspace.compute_expectation(1) == 1\n \n+\n def test_FlorySchulz():\n a = Symbol(\"a\")\n z = Symbol(\"z\")\n@@ -107,8 +108,7 @@ def test_Logarithmic():\n assert E(x) == -p / ((1 - p) * log(1 - p))\n assert variance(x) == -1/log(2)**2 + 2/log(2)\n assert E(2*x**2 + 3*x + 4) == 4 + 7 / log(2)\n- with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed\n- assert isinstance(E(x, evaluate=False), Expectation)\n+ assert isinstance(E(x, evaluate=False), Expectation)\n \n \n @nocache_fail\n@@ -120,8 +120,7 @@ def test_negative_binomial():\n # This hangs when run with the cache disabled:\n assert variance(x) == p*r / (1-p)**2\n assert E(x**5 + 2*x + 3) == Rational(9207, 4)\n- with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed\n- assert isinstance(E(x, evaluate=False), Expectation)\n+ assert isinstance(E(x, evaluate=False), Expectation)\n \n \n def test_skellam():\n@@ -148,8 +147,7 @@ def test_yule_simon():\n x = YuleSimon('x', rho)\n assert simplify(E(x)) == rho / (rho - 1)\n assert simplify(variance(x)) == rho**2 / ((rho - 1)**2 * (rho - 2))\n- with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed\n- assert isinstance(E(x, evaluate=False), Expectation)\n+ assert isinstance(E(x, evaluate=False), Expectation)\n # To test the cdf function\n assert cdf(x)(x) == Piecewise((-beta(floor(x), 4)*floor(x) + 1, x >= 1), (0, True))\n \n@@ -291,17 
+289,12 @@ def test_conditional():\n def test_product_spaces():\n X1 = Geometric('X1', S.Half)\n X2 = Geometric('X2', Rational(1, 3))\n- #assert str(P(X1 + X2 < 3, evaluate=False)) == \"\"\"Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, \"\"\"\\\n- # + \"\"\"(-X2 + n + 3 >= 1) & (-X2 + n + 3 < oo)), (0, True)), (X2, 1, oo), (n, -oo, -1))\"\"\"\n- n = Dummy('n')\n- with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed\n- assert P(X1 + X2 < 3, evaluate=False).rewrite(Sum).dummy_eq(Sum(Piecewise((2**(-n)/4,\n- n + 2 >= 1), (0, True)), (n, -oo, -1))/3)\n- #assert str(P(X1 + X2 > 3)) == \"\"\"Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, \"\"\" +\\\n- # \"\"\"(-X2 + n + 3 >= 1) & (-X2 + n + 3 < oo)), (0, True)), (X2, 1, oo), (n, 1, oo))\"\"\"\n- assert P(X1 + X2 > 3).dummy_eq(Sum(Piecewise((2**(X2 - n - 2)*(Rational(2, 3))**(X2 - 1)/6,\n- -X2 + n + 3 >= 1), (0, True)),\n- (X2, 1, oo), (n, 1, oo)))\n-# assert str(P(Eq(X1 + X2, 3))) == \"\"\"Sum(Piecewise((2**(X2 - 2)*(2/3)**(X2 - 1)/6, \"\"\" +\\\n-# \"\"\"X2 <= 2), (0, True)), (X2, 1, oo))\"\"\"\n+ assert str(P(X1 + X2 < 3).rewrite(Sum)) == (\n+ \"Sum(Piecewise((1/(4*2**n), n >= -1), (0, True)), (n, -oo, -1))/3\")\n+ assert str(P(X1 + X2 > 3).rewrite(Sum)) == (\n+ \"Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, \"\n+ \"X2 - n <= 2), (0, True)), (X2, 1, oo), (n, 1, oo))\")\n+ assert str(P(X1 + X2 > 3).rewrite(Sum)) == (\n+ \"Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, \"\n+ \"X2 - n <= 2), (0, True)), (X2, 1, oo), (n, 1, oo))\")\n assert P(Eq(X1 + X2, 3)) == Rational(1, 12)\n" }
[ { "diff_hunk": "@@ -589,10 +589,14 @@ class MatrixElement(Expr):\n def __new__(cls, name, n, m):\n n, m = map(_sympify, (n, m))\n from sympy.matrices.matrices import MatrixBase\n- if isinstance(name, (MatrixBase,)):\n+ if isinstance(name, MatrixBase):\n if n.is_Integer and m.is_Integer:\n return name[n, m]\n- if isinstance(name, str):\n+ name = _sympify(name) # change mutable into immutable\n+ elif isinstance(name, MatrixSymbol):\n+ if not name.valid_index(n, m):", "line": null, "original_line": 597, "original_start_line": null, "path": "sympy/matrices/expressions/matexpr.py", "start_line": null, "text": "@user1:\nI think that `valid_index` should always be checked, not just for MatrixSymbol e.g.:\r\n```python\r\nIn [18]: n = Symbol('n', integer=True, positive=True)\r\n\r\nIn [19]: M = MatrixSymbol('M', n, n)\r\n\r\nIn [20]: M[3, m]\r\nOut[20]: (M)[3, m]\r\n\r\nIn [21]: M[3, m].subs(M, Matrix([[1, 2], [3, 4]]))\r\nOut[21]: \r\n⎛⎡1 2⎤⎞ \r\n⎜⎢ ⎥⎟[3, m]\r\n⎝⎣3 4⎦⎠ \r\n```\n\n@author:\ngood point! To me this seems like a strange constructor in that you are creating an unevaluated indexed object rather than simply indexing an object. It works, but having to put these checks here seems odd to me." } ]
bb8ce3b796e5f00a56e3222abff5652c7e5b7dcb
diff --git a/sympy/concrete/summations.py b/sympy/concrete/summations.py index 799667109e3c..f2c74da5eaaf 100644 --- a/sympy/concrete/summations.py +++ b/sympy/concrete/summations.py @@ -28,6 +28,7 @@ from sympy.polys.partfrac import apart from sympy.polys.polyerrors import PolynomialError, PolificationFailed from sympy.polys.polytools import parallel_poly_from_expr, Poly, factor +from sympy.polys.polyutils import illegal from sympy.polys.rationaltools import together from sympy.series.limitseq import limit_seq from sympy.series.order import O @@ -1295,6 +1296,9 @@ def _eval_sum_hyper(f, i, a): def eval_sum_hyper(f, i_a_b): i, a, b = i_a_b + if f.is_hypergeometric(i) is False: + return + if (b - a).is_Integer: # We are never going to do better than doing the sum in the obvious way return None @@ -1307,10 +1311,15 @@ def eval_sum_hyper(f, i_a_b): if res is not None: return Piecewise(res, (old_sum, True)) else: + n_illegal = lambda x: sum(x.count(_) for _ in illegal) + had = n_illegal(f) + # check that no extra illegals are introduced res1 = _eval_sum_hyper(f, i, a) + if res1 is None or n_illegal(res1) > had: + return res2 = _eval_sum_hyper(f, i, b + 1) - if res1 is None or res2 is None: - return None + if res2 is None or n_illegal(res2) > had: + return (res1, cond1), (res2, cond2) = res1, res2 cond = And(cond1, cond2) if cond == False: diff --git a/sympy/concrete/tests/test_gosper.py b/sympy/concrete/tests/test_gosper.py index 70cc488cb844..fc81b70769d7 100644 --- a/sympy/concrete/tests/test_gosper.py +++ b/sympy/concrete/tests/test_gosper.py @@ -8,8 +8,8 @@ from sympy.functions.special.gamma_functions import gamma from sympy.polys.polytools import Poly from sympy.simplify.simplify import simplify -from sympy.abc import a, b, j, k, m, n, r, x from sympy.concrete.gosper import gosper_normal, gosper_sum, gosper_term +from sympy.abc import a, b, j, k, m, n, r, x def test_gosper_normal(): diff --git a/sympy/concrete/tests/test_sums_products.py b/sympy/concrete/tests/test_sums_products.py index 10d7d14e0d80..a78100a81c3f 100644 --- a/sympy/concrete/tests/test_sums_products.py +++ b/sympy/concrete/tests/test_sums_products.py @@ -29,7 +29,6 @@ from sympy.simplify.combsimp import combsimp from sympy.simplify.simplify import simplify from sympy.tensor.indexed import (Idx, Indexed, IndexedBase) -from sympy.abc import a, b, c, d, k, m, x, y, z from sympy.concrete.summations import ( telescopic, _dummy_with_inherited_properties_concrete, eval_sum_residue) from sympy.concrete.expr_with_intlimits import ReorderError @@ -38,6 +37,7 @@ from sympy.matrices import (Matrix, SparseMatrix, ImmutableDenseMatrix, ImmutableSparseMatrix) from sympy.core.mod import Mod +from sympy.abc import a, b, c, d, k, m, x, y, z n = Symbol('n', integer=True) f, g = symbols('f g', cls=Function) @@ -1586,3 +1586,10 @@ def test_process_limits(): raises(TypeError, lambda: D(x, x > 0)) raises(ValueError, lambda: D(x, Interval(1, 3))) raises(NotImplementedError, lambda: D(x, (x, union))) + + +def test_pr_22677(): + b = Symbol('b', integer=True, positive=True) + assert Sum(1/x**2,(x, 0, b)).doit() == Sum(x**(-2), (x, 0, b)) + assert Sum(1/(x - b)**2,(x, 0, b-1)).doit() == Sum( + (-b + x)**(-2), (x, 0, b - 1)) diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py index 78db834466e6..3c5b5f23ac6a 100644 --- a/sympy/matrices/expressions/matexpr.py +++ b/sympy/matrices/expressions/matexpr.py @@ -274,8 +274,8 @@ def is_valid(idx): return isinstance(idx, (int, Integer, Symbol, Expr)) return (is_valid(i) 
and is_valid(j) and (self.rows is None or - (0 <= i) != False and (i < self.rows) != False) and - (0 <= j) != False and (j < self.cols) != False) + (i >= -self.rows) != False and (i < self.rows) != False) and + (j >= -self.cols) != False and (j < self.cols) != False) def __getitem__(self, key): if not isinstance(key, tuple) and isinstance(key, slice): @@ -589,18 +589,26 @@ class MatrixElement(Expr): def __new__(cls, name, n, m): n, m = map(_sympify, (n, m)) from sympy.matrices.matrices import MatrixBase - if isinstance(name, (MatrixBase,)): - if n.is_Integer and m.is_Integer: - return name[n, m] if isinstance(name, str): name = Symbol(name) else: - name = _sympify(name) - if not isinstance(name.kind, MatrixKind): - raise TypeError("First argument of MatrixElement should be a matrix") + if isinstance(name, MatrixBase): + if n.is_Integer and m.is_Integer: + return name[n, m] + name = _sympify(name) # change mutable into immutable + else: + name = _sympify(name) + if not isinstance(name.kind, MatrixKind): + raise TypeError("First argument of MatrixElement should be a matrix") + if not getattr(name, 'valid_index', lambda n, m: True)(n, m): + raise IndexError('indices out of range') obj = Expr.__new__(cls, name, n, m) return obj + @property + def symbol(self): + return self.args[0] + def doit(self, **kwargs): deep = kwargs.get('deep', True) if deep: diff --git a/sympy/matrices/expressions/tests/test_matexpr.py b/sympy/matrices/expressions/tests/test_matexpr.py index ff0c30e8537d..980daea2c694 100644 --- a/sympy/matrices/expressions/tests/test_matexpr.py +++ b/sympy/matrices/expressions/tests/test_matexpr.py @@ -2,7 +2,7 @@ from sympy.core.exprtools import gcd_terms from sympy.core.function import (diff, expand) from sympy.core.relational import Eq -from sympy.core.symbol import (Dummy, Symbol) +from sympy.core.symbol import (Dummy, Symbol, Str) from sympy.functions.special.tensor_functions import KroneckerDelta from sympy.matrices.dense import zeros from sympy.polys.polytools import factor @@ -48,10 +48,13 @@ def test_matrix_symbol_creation(): raises(ValueError, lambda: MatrixSymbol('A', n, n)) -def test_shape(): +def test_matexpr_properties(): assert A.shape == (n, m) assert (A*B).shape == (n, l) raises(ShapeError, lambda: B*A) + assert A[0, 1].indices == (0, 1) + assert A[0, 0].symbol == A + assert A[0, 0].symbol.name == 'A' def test_matexpr(): @@ -61,7 +64,7 @@ def test_matexpr(): assert (A*B).shape == (n, l) -def test_subs(): +def test_matexpr_subs(): A = MatrixSymbol('A', n, m) B = MatrixSymbol('B', m, l) C = MatrixSymbol('C', m, l) @@ -70,6 +73,32 @@ def test_subs(): assert (A*B).subs(B, C) == A*C assert (A*B).subs(l, n).is_square + W = MatrixSymbol("W", 3, 3) + X = MatrixSymbol("X", 2, 2) + Y = MatrixSymbol("Y", 1, 2) + Z = MatrixSymbol("Z", n, 2) + # no restrictions on Symbol replacement + assert X.subs(X, Y) == Y + # it might be better to just change the name + y = Str('y') + assert X.subs(Str("X"), y).args == (y, 2, 2) + # it's ok to introduce a wider matrix + assert X[1, 1].subs(X, W) == W[1, 1] + # but for a given MatrixExpression, only change + # name if indexing on the new shape is valid. 
+ # Here, X is 2,2; Y is 1,2 and Y[1, 1] is out + # of range so an error is raised + raises(IndexError, lambda: X[1, 1].subs(X, Y)) + # here, [0, 1] is in range so the subs succeeds + assert X[0, 1].subs(X, Y) == Y[0, 1] + # and here the size of n will accept any index + # in the first position + assert W[2, 1].subs(W, Z) == Z[2, 1] + # but not in the second position + raises(IndexError, lambda: W[2, 2].subs(W, Z)) + # any matrix should raise if invalid + raises(IndexError, lambda: W[2, 2].subs(W, zeros(2))) + A = SparseMatrix([[1, 2], [3, 4]]) B = Matrix([[1, 2], [3, 4]]) C, D = MatrixSymbol('C', 2, 2), MatrixSymbol('D', 2, 2) @@ -187,11 +216,15 @@ def test_invariants(): assert obj == obj.__class__(*obj.args) -def test_indexing(): +def test_matexpr_indexing(): A = MatrixSymbol('A', n, m) A[1, 2] A[l, k] - A[l+1, k+1] + A[l + 1, k + 1] + A = MatrixSymbol('A', 2, 1) + for i in range(-2, 2): + for j in range(-1, 1): + A[i, j] def test_single_indexing(): diff --git a/sympy/matrices/expressions/tests/test_trace.py b/sympy/matrices/expressions/tests/test_trace.py index f00bd31a8181..3bd66bec2377 100644 --- a/sympy/matrices/expressions/tests/test_trace.py +++ b/sympy/matrices/expressions/tests/test_trace.py @@ -8,6 +8,8 @@ ) from sympy.matrices.expressions.special import OneMatrix from sympy.testing.pytest import raises +from sympy.abc import i + n = symbols('n', integer=True) A = MatrixSymbol('A', n, n) @@ -95,8 +97,9 @@ def test_trace_constant_factor(): assert trace(MatMul(2, X)) == 10 -def test_rewrite(): - assert isinstance(trace(A).rewrite(Sum), Sum) +def test_trace_rewrite(): + assert trace(A).rewrite(Sum) == Sum(A[i, i], (i, 0, n - 1)) + assert trace(eye(3)).rewrite(Sum) == 3 def test_trace_normalize(): diff --git a/sympy/matrices/expressions/trace.py b/sympy/matrices/expressions/trace.py index edecdd4b3924..8b462a5c4a5f 100644 --- a/sympy/matrices/expressions/trace.py +++ b/sympy/matrices/expressions/trace.py @@ -2,7 +2,7 @@ from sympy.core.expr import Expr, ExprBuilder from sympy.core.singleton import S from sympy.core.sorting import default_sort_key -from sympy.core.symbol import Dummy +from sympy.core.symbol import uniquely_named_symbol from sympy.core.sympify import sympify from sympy.matrices.matrices import MatrixBase from sympy.matrices.common import NonSquareMatrixError @@ -144,8 +144,9 @@ def get_arg_key(x): def _eval_rewrite_as_Sum(self, expr, **kwargs): from sympy.concrete.summations import Sum - i = Dummy('i') - return Sum(self.arg[i, i], (i, 0, self.arg.rows-1)).doit() + i = uniquely_named_symbol('i', expr) + s = Sum(self.arg[i, i], (i, 0, self.arg.rows - 1)) + return s.doit() def trace(expr): diff --git a/sympy/stats/rv.py b/sympy/stats/rv.py index 34384d4e2bc8..31691e1a75c9 100644 --- a/sympy/stats/rv.py +++ b/sympy/stats/rv.py @@ -39,11 +39,9 @@ from sympy.sets.sets import FiniteSet, ProductSet, Intersection from sympy.solvers.solveset import solveset from sympy.external import import_module -from sympy.utilities.misc import filldedent from sympy.utilities.decorator import doctest_depends_on from sympy.utilities.exceptions import SymPyDeprecationWarning from sympy.utilities.iterables import iterable -import warnings x = Symbol('x') @@ -841,11 +839,6 @@ def probability(condition, given_condition=None, numsamples=None, from sympy.stats.symbolic_probability import Probability if evaluate: return Probability(condition, given_condition).doit(**kwargs) - ### TODO: Remove the user warnings in the future releases - message = ("Since version 1.7, using `evaluate=False` returns 
`Probability` " - "object. If you want unevaluated Integral/Sum use " - "`P(condition, given_condition, evaluate=False).rewrite(Integral)`") - warnings.warn(filldedent(message)) return Probability(condition, given_condition) diff --git a/sympy/stats/tests/test_discrete_rv.py b/sympy/stats/tests/test_discrete_rv.py index 0e3abb6d48c8..b011872abb98 100644 --- a/sympy/stats/tests/test_discrete_rv.py +++ b/sympy/stats/tests/test_discrete_rv.py @@ -1,7 +1,7 @@ from sympy.concrete.summations import Sum from sympy.core.numbers import (I, Rational, oo, pi) from sympy.core.singleton import S -from sympy.core.symbol import (Dummy, Symbol) +from sympy.core.symbol import Symbol from sympy.functions.elementary.complexes import (im, re) from sympy.functions.elementary.exponential import log from sympy.functions.elementary.integers import floor @@ -24,7 +24,7 @@ FlorySchulz, Poisson, Geometric, Hermite, Logarithmic, NegativeBinomial, Skellam, YuleSimon, Zeta, DiscreteRV) -from sympy.testing.pytest import slow, nocache_fail, raises, ignore_warnings +from sympy.testing.pytest import slow, nocache_fail, raises from sympy.stats.symbolic_probability import Expectation x = Symbol('x') @@ -43,14 +43,15 @@ def test_Poisson(): l = 3 x = Poisson('x', l) assert E(x) == l + assert E(2*x) == 2*l assert variance(x) == l assert density(x) == PoissonDistribution(l) - with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed - assert isinstance(E(x, evaluate=False), Expectation) - assert isinstance(E(2*x, evaluate=False), Expectation) + assert isinstance(E(x, evaluate=False), Expectation) + assert isinstance(E(2*x, evaluate=False), Expectation) # issue 8248 assert x.pspace.compute_expectation(1) == 1 + def test_FlorySchulz(): a = Symbol("a") z = Symbol("z") @@ -107,8 +108,7 @@ def test_Logarithmic(): assert E(x) == -p / ((1 - p) * log(1 - p)) assert variance(x) == -1/log(2)**2 + 2/log(2) assert E(2*x**2 + 3*x + 4) == 4 + 7 / log(2) - with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed - assert isinstance(E(x, evaluate=False), Expectation) + assert isinstance(E(x, evaluate=False), Expectation) @nocache_fail @@ -120,8 +120,7 @@ def test_negative_binomial(): # This hangs when run with the cache disabled: assert variance(x) == p*r / (1-p)**2 assert E(x**5 + 2*x + 3) == Rational(9207, 4) - with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed - assert isinstance(E(x, evaluate=False), Expectation) + assert isinstance(E(x, evaluate=False), Expectation) def test_skellam(): @@ -148,8 +147,7 @@ def test_yule_simon(): x = YuleSimon('x', rho) assert simplify(E(x)) == rho / (rho - 1) assert simplify(variance(x)) == rho**2 / ((rho - 1)**2 * (rho - 2)) - with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed - assert isinstance(E(x, evaluate=False), Expectation) + assert isinstance(E(x, evaluate=False), Expectation) # To test the cdf function assert cdf(x)(x) == Piecewise((-beta(floor(x), 4)*floor(x) + 1, x >= 1), (0, True)) @@ -291,17 +289,12 @@ def test_conditional(): def test_product_spaces(): X1 = Geometric('X1', S.Half) X2 = Geometric('X2', Rational(1, 3)) - #assert str(P(X1 + X2 < 3, evaluate=False)) == """Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, """\ - # + """(-X2 + n + 3 >= 1) & (-X2 + n + 3 < oo)), (0, True)), (X2, 1, oo), (n, -oo, -1))""" - n = Dummy('n') - with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed - assert P(X1 + X2 < 3, 
evaluate=False).rewrite(Sum).dummy_eq(Sum(Piecewise((2**(-n)/4, - n + 2 >= 1), (0, True)), (n, -oo, -1))/3) - #assert str(P(X1 + X2 > 3)) == """Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, """ +\ - # """(-X2 + n + 3 >= 1) & (-X2 + n + 3 < oo)), (0, True)), (X2, 1, oo), (n, 1, oo))""" - assert P(X1 + X2 > 3).dummy_eq(Sum(Piecewise((2**(X2 - n - 2)*(Rational(2, 3))**(X2 - 1)/6, - -X2 + n + 3 >= 1), (0, True)), - (X2, 1, oo), (n, 1, oo))) -# assert str(P(Eq(X1 + X2, 3))) == """Sum(Piecewise((2**(X2 - 2)*(2/3)**(X2 - 1)/6, """ +\ -# """X2 <= 2), (0, True)), (X2, 1, oo))""" + assert str(P(X1 + X2 < 3).rewrite(Sum)) == ( + "Sum(Piecewise((1/(4*2**n), n >= -1), (0, True)), (n, -oo, -1))/3") + assert str(P(X1 + X2 > 3).rewrite(Sum)) == ( + "Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, " + "X2 - n <= 2), (0, True)), (X2, 1, oo), (n, 1, oo))") + assert str(P(X1 + X2 > 3).rewrite(Sum)) == ( + "Sum(Piecewise((2**(X2 - n - 2)*(2/3)**(X2 - 1)/6, " + "X2 - n <= 2), (0, True)), (X2, 1, oo), (n, 1, oo))") assert P(Eq(X1 + X2, 3)) == Rational(1, 12)
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }
sympy__sympy-22562@3ad091a
sympy/sympy
Python
22,562
Bugfix in `minimal_polynomial()` and `field_isomorphism()`.
#### References to other Issues or PRs
Fixes #22561

#### Brief description of what is fixed or changed
The `minimal_polynomial()` and `field_isomorphism()` functions were not always doing what they advertised to do. This was due to a misapprehension of what an instance of the `AlgebraicNumber` class represents.

An `AlgebraicNumber` instance represents an element `alpha` of a number field `Q(theta)`. The primitive element `theta` for the number field is represented by its minimal polynomial, and a particular root thereof. The element `alpha` is represented as a polynomial in `theta`.

The `minimal_polynomial()` and `field_isomorphism()` functions were written as if an `AlgebraicNumber` instance represented not `alpha` but `theta`. In other words, these functions operated on the primitive element of the number field, not on the represented element itself.

We repair this as follows:
* Provide new docstrings for the `AlgebraicNumber` class, explaining what it represents and how.
* Provide new methods for the `AlgebraicNumber` class, making it easy to:
  - get the minimal polynomial for alpha
  - test whether alpha == theta
  - convert into an instance where alpha does equal theta
* Update the `minimal_polynomial()` and `field_isomorphism()` functions to operate on alpha, not theta.
* Provide new unit tests that failed before the fix.

#### Other comments

#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* polys
  * Fixed related issues in `minimal_polynomial()` and `field_isomorphism()`.
<!-- END RELEASE NOTES -->
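A brief sketch of how the element-versus-primitive-element distinction surfaces through the helpers added in the commit under review; the method names `minpoly_of_elt` and `primitive_elt` are taken from that commit's docstrings and may have been renamed before merge, so treat this as illustrative only:

```python
from sympy import sqrt, to_number_field
from sympy.abc import x

# alpha = sqrt(2), represented as an element of Q(sqrt(2) + sqrt(3))
a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))
print(a.as_poly().as_expr(x))         # x**3/2 - 9*x/2, alpha written in terms of theta
print(a.minpoly_of_elt().as_expr(x))  # x**2 - 2, the minimal polynomial of alpha itself
print(a.primitive_elt())              # sqrt(2) + sqrt(3), the field's primitive element
```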
2021-11-29T19:50:35Z
Some functions misinterpret `AlgebraicNumber`s.

In certain cases, the functions `minimal_polynomial()` and `field_isomorphism()` fail to operate on the numbers that are actually represented by instances of `AlgebraicNumber`.

## Explanation
An `AlgebraicNumber` instance represents an element `alpha` of a number field `Q(theta)`. The primitive element `theta` for the number field is represented by its minimal polynomial, and a particular root thereof. The element `alpha` is represented as a polynomial in `theta`.

The `minimal_polynomial()` and `field_isomorphism()` functions are treating `AlgebraicNumber` instances as if they represent not `alpha` but `theta`. In other words, these functions are operating on the primitive element of the number field, not on the represented element itself.

## Example 1, `minimal_polynomial()`
```python
>>> from sympy import AlgebraicNumber, minimal_polynomial, sqrt, S
>>> from sympy.abc import x
>>> theta = sqrt(2) + sqrt(3)
>>> a = AlgebraicNumber(theta, [S(1) / 2, 0, S(-9) / 2, 0])
>>> a.as_expr()
sqrt(2)
```
The `AlgebraicNumber` `a` represents the number `sqrt(2)`, as an element of the number field `Q(theta)`. (The coefficients `[1/2, 0, -9/2, 0]` in the definition of `a` correspond to the fact that `sqrt(2) = theta**3/2 - 9*theta/2`.)

But when we ask for the minimal polynomial of `a`, expecting `x**2 - 2`, we instead get the minimal polynomial for `theta`:
```python
>>> minimal_polynomial(a, x)
x**4 - 10*x**2 + 1
>>> minimal_polynomial(theta, x)
x**4 - 10*x**2 + 1
```

## Example 2, `field_isomorphism()`
```python
>>> from sympy import to_number_field, field_isomorphism, sqrt
>>> theta = sqrt(2) + sqrt(3)
>>> eta = sqrt(2) + sqrt(5)
>>> a = to_number_field(sqrt(2), theta)
>>> b = to_number_field(sqrt(2), eta)
>>> a.as_expr()
sqrt(2)
>>> b.as_expr()
sqrt(2)
>>> print(field_isomorphism(a, b))
None
```
Here, `a` and `b` are `AlgebraicNumber`s representing the number `sqrt(2)` in the fields `Q(theta)` and `Q(eta)`, respectively.

When we ask whether there is a field isomorphism from `Q(a)` to `Q(b)` we are asking for an embedding from `Q(sqrt(2))` into `Q(sqrt(2))`. The answer should be `[1, 0]`, meaning we can map `a` to `1*b + 0`. But instead we get `None`, because what `field_isomorphism()` is actually telling us is that there is no embedding from `Q(theta)` into `Q(eta)` (which is true, but it's not the question we asked).
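To pin down the expectation from Example 1 in runnable form: since `a.as_expr()` is `sqrt(2)`, the minimal polynomial of the represented element should be `x**2 - 2`; passing the explicit expression is an editor-suggested workaround that also works on releases without this fix:

```python
from sympy import AlgebraicNumber, minimal_polynomial, sqrt, S
from sympy.abc import x

theta = sqrt(2) + sqrt(3)
a = AlgebraicNumber(theta, [S(1) / 2, 0, S(-9) / 2, 0])  # represents sqrt(2) in Q(theta)
print(minimal_polynomial(a.as_expr(), x))  # x**2 - 2, the polynomial the issue expects
# Before the fix, minimal_polynomial(a, x) returned theta's polynomial instead:
# x**4 - 10*x**2 + 1
```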
[ { "body": "In certain cases, the functions `minimal_polynomial()` and `field_isomorphism()`\r\nfail to operate on the numbers that are actually represented by instances of\r\n`AlgebraicNumber`.\r\n\r\n\r\n## Explanation\r\n\r\nAn `AlgebraicNumber` instance represents an element `alpha` of a number field `Q(theta)`. The primitive\r\nelement `theta` for the number field is represented by its minimal polynomial, and a particular root thereof.\r\nThe element `alpha` is represented as a polynomial in `theta`.\r\n\r\nThe `minimal_polynomial()` and `field_isomorphism()` functions are treating `AlgebraicNumber`\r\ninstances as if they represent not `alpha` but `theta`. In other words, these functions are operating\r\non the primitive element of the number field, not on the represented element itself.\r\n\r\n\r\n## Example 1, `minimal_polynomial()`\r\n\r\n```python\r\n>>> from sympy import AlgebraicNumber, minimal_polynomial, sqrt, S\r\n>>> from sympy.abc import x\r\n>>> theta = sqrt(2) + sqrt(3)\r\n>>> a = AlgebraicNumber(theta, [S(1) / 2, 0, S(-9) / 2, 0])\r\n>>> a.as_expr()\r\nsqrt(2)\r\n```\r\n\r\nThe `AlgebraicNumber` `a` represents the number `sqrt(2)`, as an element of the number field `Q(theta)`.\r\n\r\n(The coefficients `[1/2, 0, -9/2, 0]` in the definition of `a` correspond to the fact that `sqrt(2) = theta**3/2 - 9*theta/2`.)\r\n\r\nBut when we ask for the minimal polynomial of `a`, expecting `x**2 - 2`, we instead get the minimal polynomial for `theta`:\r\n\r\n\r\n```python\r\n>>> minimal_polynomial(a, x)\r\nx**4 - 10*x**2 + 1\r\n>>> minimal_polynomial(theta, x)\r\nx**4 - 10*x**2 + 1\r\n```\r\n\r\n\r\n## Example 2, `field_isomorphism()`\r\n\r\n```python\r\n>>> from sympy import to_number_field, field_isomorphism, sqrt\r\n>>> theta = sqrt(2) + sqrt(3)\r\n>>> eta = sqrt(2) + sqrt(5)\r\n>>> a = to_number_field(sqrt(2), theta)\r\n>>> b = to_number_field(sqrt(2), eta)\r\n>>> a.as_expr()\r\nsqrt(2)\r\n>>> b.as_expr()\r\nsqrt(2)\r\n>>> print(field_isomorphism(a, b))\r\nNone\r\n```\r\n\r\nHere, `a` and `b` are `AlgebraicNumber`s representing the number `sqrt(2)` in the fields\r\n`Q(theta)` and `Q(eta)`, respectively.\r\n\r\nWhen we ask whether there is a field isomorphism from `Q(a)` to `Q(b)` we are asking\r\nfor an embedding from `Q(sqrt(2))` into `Q(sqrt(2))`. The answer should be `[1, 0]`,\r\nmeaning we can map `a` to `1*b + 0`. But instead we get `None`, because what `field_isomorphism()`\r\nis actually telling us is that there is no embedding from `Q(theta)` into `Q(eta)` (which is true, but\r\nit's not the question we asked).\r\n\r\n", "number": 22561, "title": "Some functions misinterpret `AlgebraicNumber`s." } ]
b7f44335287bb9c410a8d35a24267519b3914516
{ "head_commit": "3ad091a534209bdc6a2a021e1f195e847b7ba847", "head_commit_message": "Bugfix in `minimal_polynomial()` and `field_isomorphism()`.\n\nThe `minimal_polynomial()` and `field_isomorphism()` functions were not always doing what they advertised to do.\nThis was due to a misapprehension of what an instance of the `AlgebraicNumber` class represents.\n\nAn `AlgebraicNumber` instance represents an element `alpha` of a number field `Q(theta)`. The primitive\nelement `theta` for the number field is represented by its minimal polynomial, and a particular root thereof.\nThe element `alpha` is represented as a polynomial in `theta`.\n\nThe `minimal_polynomial()` and `field_isomorphism()` functions were written as if an `AlgebraicNumber`\ninstance represented not `alpha` but `theta`. In other words, these functions operated on the primitive\nelement of the number field, not on the represented element itself.\n\nWe repair this as follows:\n* Provide new docstrings for the `AlgebraicNumber` class, explaining what it represents and how.\n* Provide new methods for the `AlgebraicNumber` class, making it easy to:\n - get the minimal polynomial for alpha\n - test whether alpha == theta\n - convert into an instance were alpha does equal theta\n* Update the `minimal_polynomial()` and `field_isomorphism()` functions to operate on alpha, not theta.\n* Provide new unit tests that failed before the fix.", "patch_to_review": "diff --git a/doc/src/modules/core.rst b/doc/src/modules/core.rst\nindex c6fbe3ec8f59..0ade07ed6df3 100644\n--- a/doc/src/modules/core.rst\n+++ b/doc/src/modules/core.rst\n@@ -129,6 +129,8 @@ AlgebraicNumber\n .. autoclass:: AlgebraicNumber\n :members:\n \n+ .. automethod:: AlgebraicNumber.__new__\n+\n NumberSymbol\n ^^^^^^^^^^^^\n .. autoclass:: NumberSymbol\ndiff --git a/sympy/core/numbers.py b/sympy/core/numbers.py\nindex 3b0bca77f5e7..accff0a8e23c 100644\n--- a/sympy/core/numbers.py\n+++ b/sympy/core/numbers.py\n@@ -2509,9 +2509,24 @@ def __invert__(self):\n \n \n class AlgebraicNumber(Expr):\n- \"\"\"Class for representing algebraic numbers in SymPy. \"\"\"\n+ r\"\"\"\n+ Class for representing algebraic numbers in SymPy.\n+\n+ Symbolically, an instance of this class represents an element\n+ $\\alpha \\in \\mathbb{Q}(\\theta) \\hookrightarrow \\mathbb{C}$. That is, the\n+ algebraic number $\\alpha$ is represented as an element of a particular\n+ number field $\\mathbb{Q}(\\theta)$, with a particular embedding of this\n+ field into the complex numbers.\n+\n+ Formally, the primitive element $\\theta$ is given by two data points: (1)\n+ its minimal polynomial (which defines $\\mathbb{Q}(\\theta)$), and (2) a\n+ particular complex number that is a root of this polynomial (which defines\n+ the embedding $\\mathbb{Q}(\\theta) \\hookrightarrow \\mathbb{C}$). Finally,\n+ the algebraic number $\\alpha$ which we represent is then given by the\n+ coefficients of a polynomial in $\\theta$.\n+ \"\"\"\n \n- __slots__ = ('rep', 'root', 'alias', 'minpoly')\n+ __slots__ = ('rep', 'root', 'alias', 'minpoly', '_own_minpoly')\n \n is_AlgebraicNumber = True\n is_algebraic = True\n@@ -2526,7 +2541,97 @@ class AlgebraicNumber(Expr):\n free_symbols: tSet[Basic] = set()\n \n def __new__(cls, expr, coeffs=None, alias=None, **args):\n- \"\"\"Construct a new algebraic number. 
\"\"\"\n+ r\"\"\"\n+ Construct a new algebraic number $\\alpha$ belonging to a number field\n+ $k = \\mathbb{Q}(\\theta)$.\n+\n+ Parameters\n+ ==========\n+\n+ expr : :py:class:`~.Expr`, or pair (m, r)\n+ This defines the primitive element $\\theta$ of the number field\n+ $k$. If *expr* is an :py:class:`~.AlgebraicNumber`, then our\n+ primitive element $\\theta$ will be the same as that of *expr*.\n+ If it is any other type of :py:class:`~.Expr`, then it itself will\n+ be our primitive element. Therefore it must express an algebraic\n+ quantity, and we will compute its minimal polynomial.\n+ Otherwise *expr* must be an ordered pair\n+ $(m, r)$ giving the minimal polynomial $m$, and a root $r$\n+ thereof, which together define $\\theta$. In this case $m$ may be\n+ either a univariate :py:class:`~.Poly` or any :py:class:`~.Expr`\n+ which represents the same, while $r$ must be some\n+ :py:class:`~.Expr` representing a complex number that is a root of\n+ $m$, including both explicit expressions, and instances of\n+ :py:class:`~.ComplexRootOf`.\n+\n+ coeffs : list, :py:class:`~.ANP`, None, optional (default=None)\n+ This defines the algebraic number $\\alpha$ as an element of $k$,\n+ as a linear combination of falling powers of $\\theta$.\n+ If a list, the elements should be integers or rational numbers.\n+ If an :py:class:`~.ANP`, we take its coefficients (using its\n+ :py:meth:`~.ANP.to_list()` method). If ``None``, then the list of\n+ coefficients defaults to ``[1, 0]``, meaning that $\\alpha = \\theta$\n+ is the primitive element of the field.\n+\n+ alias : str, :py:class:`~.Symbol`, None, optional (default=None)\n+ This is a way to provide a name for the primitive element. We\n+ described several ways in which the *expr* argument can define the\n+ value of the primitive element, but none of these methods gave it\n+ a name. 
Here, for example, *alias* could be set as\n+ ``Symbol('theta')``, in order to make this symbol appear when\n+ $\\alpha$ is rendered as a polynomial, using the\n+ :py:meth:`~.as_poly()` method.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import AlgebraicNumber, sqrt, CRootOf, S\n+ >>> from sympy.abc import x, theta\n+\n+ *expr* an explicit algebraic number, *coeffs* ``None``:\n+\n+ >>> a0 = AlgebraicNumber(sqrt(2) + sqrt(3))\n+ >>> a0.minpoly_of_elt().as_expr(x)\n+ x**4 - 10*x**2 + 1\n+ >>> a0.n(10)\n+ 3.146264370\n+\n+ *expr* an explicit algebraic number, *coeffs* given:\n+\n+ >>> a1 = AlgebraicNumber(sqrt(2) + sqrt(3), [S(1)/2, 0, S(-9)/2, 0])\n+ >>> a1.minpoly_of_elt().as_expr(x)\n+ x**2 - 2\n+ >>> a1.n(10)\n+ 1.414213562\n+ >>> a1.primitive_elt()\n+ sqrt(2) + sqrt(3)\n+\n+ *expr* an :py:class:`~.AlgebraicNumber` instance, *alias* provided:\n+\n+ >>> a2 = AlgebraicNumber(a0, [S(1)/2, 0, S(-9)/2, 0], alias=theta)\n+ >>> a2.primitive_elt() == a0\n+ True\n+ >>> a2.as_expr() == a1.as_expr()\n+ True\n+ >>> a1.as_poly().as_expr()\n+ _x**3/2 - 9*_x/2\n+ >>> a2.as_poly().as_expr()\n+ theta**3/2 - 9*theta/2\n+\n+ *expr* a pair (poly, explicit root):\n+\n+ >>> f = x**2 - x - 1\n+ >>> a3 = AlgebraicNumber((f, (1 + sqrt(5))/2))\n+ >>> a3.primitive_elt().n(10)\n+ 1.618033989\n+\n+ *expr* a pair (poly, implicit root):\n+\n+ >>> a4 = AlgebraicNumber((f, CRootOf(f, -1)))\n+ >>> a4.primitive_elt().n(10)\n+ 1.618033989\n+\n+ \"\"\"\n from sympy.polys.polyclasses import ANP, DMP\n from sympy.polys.numberfields import minimal_polynomial\n \n@@ -2576,6 +2681,8 @@ def __new__(cls, expr, coeffs=None, alias=None, **args):\n obj.alias = alias\n obj.minpoly = minpoly\n \n+ obj._own_minpoly = None\n+\n return obj\n \n def __hash__(self):\n@@ -2641,6 +2748,119 @@ def _eval_simplify(self, **kwargs):\n return AlgebraicNumber(r)\n return self\n \n+ @property\n+ def is_primitive_elt(self):\n+ r\"\"\"\n+ Say whether this algebraic number $\\alpha \\in \\mathbb{Q}(\\theta)$ is\n+ equal to the primitive element $\\theta$ for its field.\n+ \"\"\"\n+ return self.coeffs() == [1, 0]\n+\n+ def primitive_elt(self):\n+ r\"\"\"\n+ Get the primitive element $\\theta$ for the number field\n+ $\\mathbb{Q}(\\theta)$ to which this algebraic number $\\alpha$ belongs.\n+\n+ Returns\n+ =======\n+\n+ AlgebraicNumber\n+\n+ \"\"\"\n+ if self.is_primitive_elt:\n+ return self\n+ return AlgebraicNumber(self, coeffs=[1, 0])\n+\n+ def to_primitive_elt(self, prec=15):\n+ r\"\"\"\n+ Convert ``self`` to an :py:class:`~.AlgebraicNumber` instance that is\n+ equal to its own primitive element.\n+\n+ Explanation\n+ ===========\n+\n+ Since an :py:class:`~.AlgebraicNumber` stores both the minimal\n+ polynomial, and a particular root value, for its primitive element,\n+ it is sometimes more convenient to work with instances that equal their\n+ own primitive element.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import sqrt, to_number_field\n+ >>> from sympy.abc import x\n+ >>> a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))\n+\n+ The :py:class:`~.AlgebraicNumber` ``a`` represents the number\n+ $\\sqrt{2}$ in the field $\\mathbb{Q}(\\sqrt{2} + \\sqrt{3})$. Rendering\n+ ``a`` as a polynomial,\n+\n+ >>> a.as_poly().as_expr(x)\n+ x**3/2 - 9*x/2\n+\n+ reflects the fact that $\\sqrt{2} = \\theta^3/2 - 9 \\theta/2$, where\n+ $\\theta = \\sqrt{2} + \\sqrt{3}$.\n+\n+ ``a`` is not equal to its own primitive element. 
Its minpoly\n+\n+ >>> a.minpoly.as_poly().as_expr(x)\n+ x**4 - 10*x**2 + 1\n+\n+ is that of $\\theta$.\n+\n+ Converting to a primitive element,\n+\n+ >>> a_prim = a.to_primitive_elt()\n+ >>> a_prim.minpoly.as_poly().as_expr(x)\n+ x**2 - 2\n+\n+ we obtain an :py:class:`~.AlgebraicNumber` whose ``minpoly`` is that of\n+ the number itself.\n+\n+ Parameters\n+ ==========\n+\n+ prec : int, optional (default=15)\n+ Decimal places of precision for determining to which root of its\n+ minimal polynomial this number is equal.\n+\n+ Returns\n+ =======\n+\n+ AlgebraicNumber\n+\n+ See Also\n+ ========\n+\n+ is_primitive_elt\n+\n+ \"\"\"\n+ if self.is_primitive_elt:\n+ return self\n+ m = self.minpoly_of_elt()\n+ r0 = self.n(prec)\n+ closest_root, min_distance = None, oo\n+ for r in m.all_roots():\n+ d = abs(r0 - r.n(prec))\n+ if d < min_distance:\n+ min_distance = d\n+ closest_root = r\n+ return AlgebraicNumber((m, closest_root))\n+\n+ def minpoly_of_elt(self):\n+ r\"\"\"\n+ Compute the minimal polynomial for the element\n+ $\\alpha \\in \\mathbb{Q}(\\theta)$ we represent.\n+ \"\"\"\n+ if self._own_minpoly is None:\n+ if self.is_primitive_elt:\n+ self._own_minpoly = self.minpoly\n+ else:\n+ from sympy.polys.numberfields.minpoly import minpoly\n+ theta = self.primitive_elt()\n+ self._own_minpoly = minpoly(self.as_expr(theta), polys=True)\n+ return self._own_minpoly\n+\n \n class RationalConstant(Rational):\n \"\"\"\ndiff --git a/sympy/polys/numberfields/minpoly.py b/sympy/polys/numberfields/minpoly.py\nindex 797a2af20521..607365bc3510 100644\n--- a/sympy/polys/numberfields/minpoly.py\n+++ b/sympy/polys/numberfields/minpoly.py\n@@ -808,10 +808,10 @@ def bottom_up_scan(ex):\n else:\n return symbols[expr]\n elif ex.is_AlgebraicNumber:\n- if ex.root not in mapping:\n- return update_mapping(ex.root, ex.minpoly)\n+ if ex not in mapping:\n+ return update_mapping(ex, ex.minpoly_of_elt())\n else:\n- return symbols[ex.root]\n+ return symbols[ex]\n \n raise NotAlgebraic(\"%s doesn't seem to be an algebraic number\" % ex)\n \n@@ -840,7 +840,7 @@ def simpler_inverse(ex):\n inverted = False\n ex = expand_multinomial(ex)\n if ex.is_AlgebraicNumber:\n- return ex.minpoly.as_expr(x)\n+ return ex.minpoly_of_elt().as_expr(x)\n elif ex.is_Rational:\n result = ex.q*x - ex.p\n else:\ndiff --git a/sympy/polys/numberfields/subfield.py b/sympy/polys/numberfields/subfield.py\nindex cc5e383a3ad1..3705cfae98d3 100644\n--- a/sympy/polys/numberfields/subfield.py\n+++ b/sympy/polys/numberfields/subfield.py\n@@ -209,6 +209,9 @@ def field_isomorphism(a, b, *, fast=True):\n if not b.is_AlgebraicNumber:\n b = AlgebraicNumber(b)\n \n+ a = a.to_primitive_elt()\n+ b = b.to_primitive_elt()\n+\n if a == b:\n return a.coeffs()\n \ndiff --git a/sympy/polys/numberfields/tests/test_minpoly.py b/sympy/polys/numberfields/tests/test_minpoly.py\nindex ee202b914646..39fd55e2ee7d 100644\n--- a/sympy/polys/numberfields/tests/test_minpoly.py\n+++ b/sympy/polys/numberfields/tests/test_minpoly.py\n@@ -427,6 +427,13 @@ def test_issue_22559():\n assert minimal_polynomial(alpha**3, x) == x**2 - 8\n \n \n+def test_issue_22561():\n+ a = AlgebraicNumber(sqrt(2) + sqrt(3), [S(1) / 2, 0, S(-9) / 2, 0], gen=x)\n+ assert a.as_expr() == sqrt(2)\n+ assert minimal_polynomial(a, x) == x**2 - 2\n+ assert minimal_polynomial(a**3, x) == x**2 - 8\n+\n+\n def test_separate_sq_not_impl():\n raises(NotImplementedError, lambda: _separate_sq(x**(S(1)/3) + x))\n \ndiff --git a/sympy/polys/numberfields/tests/test_subfield.py b/sympy/polys/numberfields/tests/test_subfield.py\nindex 
68cf230f5157..918e19a694a9 100644\n--- a/sympy/polys/numberfields/tests/test_subfield.py\n+++ b/sympy/polys/numberfields/tests/test_subfield.py\n@@ -281,3 +281,9 @@ def test_to_number_field():\n assert to_number_field(sqrt(2), AlgebraicNumber(sqrt(2) + sqrt(3))) == a\n \n raises(IsomorphismFailed, lambda: to_number_field(sqrt(2), sqrt(3)))\n+\n+\n+def test_issue_22561():\n+ a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))\n+ b = to_number_field(sqrt(2), sqrt(2) + sqrt(5))\n+ assert field_isomorphism(a, b) == [1, 0]\n" }
[ { "diff_hunk": "@@ -2641,6 +2748,119 @@ def _eval_simplify(self, **kwargs):\n return AlgebraicNumber(r)\n return self\n \n+ @property\n+ def is_primitive_elt(self):\n+ r\"\"\"\n+ Say whether this algebraic number $\\alpha \\in \\mathbb{Q}(\\theta)$ is\n+ equal to the primitive element $\\theta$ for its field.\n+ \"\"\"\n+ return self.coeffs() == [1, 0]\n+\n+ def primitive_elt(self):\n+ r\"\"\"\n+ Get the primitive element $\\theta$ for the number field\n+ $\\mathbb{Q}(\\theta)$ to which this algebraic number $\\alpha$ belongs.\n+\n+ Returns\n+ =======\n+\n+ AlgebraicNumber\n+\n+ \"\"\"\n+ if self.is_primitive_elt:\n+ return self\n+ return AlgebraicNumber(self, coeffs=[1, 0])\n+\n+ def to_primitive_elt(self, prec=15):\n+ r\"\"\"\n+ Convert ``self`` to an :py:class:`~.AlgebraicNumber` instance that is\n+ equal to its own primitive element.\n+\n+ Explanation\n+ ===========\n+\n+ Since an :py:class:`~.AlgebraicNumber` stores both the minimal\n+ polynomial, and a particular root value, for its primitive element,\n+ it is sometimes more convenient to work with instances that equal their\n+ own primitive element.\n+\n+ Examples\n+ ========\n+\n+ >>> from sympy import sqrt, to_number_field\n+ >>> from sympy.abc import x\n+ >>> a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))\n+\n+ The :py:class:`~.AlgebraicNumber` ``a`` represents the number\n+ $\\sqrt{2}$ in the field $\\mathbb{Q}(\\sqrt{2} + \\sqrt{3})$. Rendering\n+ ``a`` as a polynomial,\n+\n+ >>> a.as_poly().as_expr(x)\n+ x**3/2 - 9*x/2\n+\n+ reflects the fact that $\\sqrt{2} = \\theta^3/2 - 9 \\theta/2$, where\n+ $\\theta = \\sqrt{2} + \\sqrt{3}$.\n+\n+ ``a`` is not equal to its own primitive element. Its minpoly\n+\n+ >>> a.minpoly.as_poly().as_expr(x)\n+ x**4 - 10*x**2 + 1\n+\n+ is that of $\\theta$.\n+\n+ Converting to a primitive element,\n+\n+ >>> a_prim = a.to_primitive_elt()\n+ >>> a_prim.minpoly.as_poly().as_expr(x)\n+ x**2 - 2\n+\n+ we obtain an :py:class:`~.AlgebraicNumber` whose ``minpoly`` is that of\n+ the number itself.\n+\n+ Parameters\n+ ==========\n+\n+ prec : int, optional (default=15)\n+ Decimal places of precision for determining to which root of its\n+ minimal polynomial this number is equal.\n+\n+ Returns\n+ =======\n+\n+ AlgebraicNumber\n+\n+ See Also\n+ ========\n+\n+ is_primitive_elt\n+\n+ \"\"\"\n+ if self.is_primitive_elt:\n+ return self\n+ m = self.minpoly_of_elt()\n+ r0 = self.n(prec)\n+ closest_root, min_distance = None, oo\n+ for r in m.all_roots():\n+ d = abs(r0 - r.n(prec))\n+ if d < min_distance:\n+ min_distance = d\n+ closest_root = r\n+ return AlgebraicNumber((m, closest_root))", "line": null, "original_line": 2848, "original_start_line": null, "path": "sympy/core/numbers.py", "start_line": null, "text": "@author:\n@user1 Do you know if there's an existing function for this problem? I couldn't find one, so wrote an ad hoc solution.\r\n\r\nYou have a `Poly` `m`, and you have a number (in this case `self`), and you know that the number is equal to one of the roots of `m`. You want to determine which of the roots it is, and then you want to obtain an exact expression for that root (such as a `CRootOf`, or an explicit expression using radicals etc.).\n\n@user1:\nI'm not sure there is a function but one could be added to rootisolation.py. The key function is `dup_isolate_all_roots` which gives bounding intervals for each of the roots. Then evalf (i.e. 
.n) can tell us if our expression is in the bounding interval.\n\n@user1:\nThis function is the only thing I'm not sure about\n\n@user2:\nMaybe this could be based on the same idea that is used in `_choose_factor`:\r\nhttps://github.com/sympy/sympy/blob/551c16facb2081c7b7a085aeaf20316244a1da01/sympy/polys/numberfields/minpoly.py#L80-L86\r\nAll distances are computed and the two least ones are compared. It they have different orders of magnitude (probably even less than `10**6` would suffice), then the minimum is chosen, otherwise the precision in incremented in a loop. There is no need to give a precision parameter as an argument.\n\n@author:\nWait...... First, thanks @user1 and @user2 for your ideas. I think you're on the right track for the problem I asked to solve. However, maybe it's not needed here.\r\n\r\nWhat we're talking about could go in a method called `to_CRootOf()`, which would turn an `AlgebraicNumber` into a `CRootOf` equal to the same complex number.\r\n\r\nBut I think all we need here is any `Expr` that can be `evalf`-ed to arbitrary precision, and `self` already is that.\r\nSo the whole method becomes:\r\n\r\n```python\r\ndef to_primitive_elt(self):\r\n if self.is_primitive_elt:\r\n return self\r\n m = self.minpoly_of_elt()\r\n return AlgebraicNumber((m, self))\r\n```\r\n\r\nSorry for the confusion! I didn't see this earlier.\n\n@author:\nCancel that. This turns out to be quite tricky.\r\n\r\n## TL;DR\r\n\r\nI think we were on the right track in the first place, and I'll push a new commit that does a better job of determining the root.\r\n\r\n## Details\r\n\r\nIt seems that using one `AlgebraicNumber` as the `root` arg to another `AlgebraicNumber` is going to be very error prone and should probably be avoided.\r\n\r\nAs a demonstration of what can happen, let's start with an `AlgebraicNumber` `a` and its minpoly `m`:\r\n\r\n```python\r\n>>> from sympy import sqrt, to_number_field, AlgebraicNumber\r\n>>> a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))\r\n>>> a.as_expr()\r\nsqrt(2)\r\n>>> m = a.minpoly_of_elt()\r\n```\r\n\r\nand use these as the `(m, r)` when constructing a new `AlgebraicNumber` `b`:\r\n\r\n```python\r\n>>> b = AlgebraicNumber((m, a))\r\n>>> b.as_expr()\r\nsqrt(2)\r\n```\r\n\r\nSo far, so good.\r\n\r\nBut now, due to the funny way `AlgebraicNumber.__new__()` treats its first arg, when that arg is an `AlgebraicNumber`,\r\n\r\nhttps://github.com/sympy/sympy/blob/519f1b75eda8830189b1385197f2c4fb49d333aa/sympy/core/numbers.py#L2541-L2542\r\n\r\n`b` is not going to be picklable:\r\n\r\n```python\r\n>>> import pickle\r\n>>> c = pickle.loads(pickle.dumps(b))\r\n>>> c.as_expr()\r\nsqrt(2) + sqrt(3)\r\n```\r\n\r\nYipes!\r\n\r\nBut maybe we can repair pickling by changing this line:\r\n\r\nhttps://github.com/sympy/sympy/blob/519f1b75eda8830189b1385197f2c4fb49d333aa/sympy/core/numbers.py#L2564\r\n\r\nto\r\n\r\n```python\r\n sargs = (expr, scoeffs)\r\n```\r\n\r\nwhich seems totally reasonable. Let our args be exactly as they were passed to us.\r\n\r\nBut this turns out to have too many far-flung ramifications.\r\n\r\nFor just one example, [this unit test](https://github.com/sympy/sympy/blob/519f1b75eda8830189b1385197f2c4fb49d333aa/sympy/polys/numberfields/tests/test_minpoly.py#L416-L422) is broken by it, because this method:\r\n\r\nhttps://github.com/sympy/sympy/blob/519f1b75eda8830189b1385197f2c4fb49d333aa/sympy/polys/domains/gaussiandomains.py#L320-L323\r\n\r\nexpects the first arg of an `AlgebraicNumber` to always equal its root. 
In how many other places might similar expectations be embedded? This approach begins to look like much more trouble than it's worth, so I'll revert to the original approach.\n\n@user1:\nI have to say that I've never quite understood how `AlgebraicNumber` works. Maybe it is not used consistently and should be fixed.\n\n@user2:\nIt seems that an instance of `AlgebraicNumber` is practically always the same as the primitive element except in some tests. Maybe `coeffs` could be deprecated (and the tests removed).\n\n@author:\nThe possibility did cross my mind, but I think there are strong reasons for keeping it.\r\n\r\n* A computer algebra system should have a class that represents \"an element of a number field\".\r\nSuch elements are usually thought of as polynomials in the primitive element, and the class should\r\nprovide things like printing support for displaying the element in this form. `AlgebraicNumber`\r\ncurrently provides all this, but it will cease to if we give up `coeffs`.\r\n\r\n* The `ANP` class does things the same way -- i.e. it represents the minimal polynomial of the\r\nprimitive element, plus the coeffs of the given element -- but this is on the `polys` side of things.\r\nIt is not an `Expr`.\r\n\r\n* An `AlgebraicNumber` with `coeffs` is needed as the return value of the public `to_number_field()`\r\nfunction.\r\n\r\n* If we give up `coeffs`, then we just have a `minpoly` and a `root`, in which case we're essentially\r\nthe same thing as a `CRootOf`. (Which, to be honest, is a fine representation of an algebraic number for\r\ncertain applications -- but then it would be redundant.)\r\n\r\nI guess those are the main reasons that I have considered.\r\n\r\nAlso, I'm still hoping to implement more things from Cohen. So, you're probably right that right now the library mostly treats these things as their primitive elements, but I'm hoping to add some cool stuff which will make an \"element of a number field\" class of this kind more important!\n\n@user1:\nI think that `AlgebraicNumber` should just be fixed to work properly.\n\n@user1:\n> But now, due to the funny way AlgebraicNumber.__new__() treats its first arg, when that arg is an AlgebraicNumber,\r\n\r\nThere `expr.coeffs` should be checked as well. That should be combined with the newly provided coeffs using e.g. `dup_compose`\n\n@author:\nBut isn't it a backwards compatibility issue? Otherwise I'd be all for changing the behavior here. (I think the special case should just be eliminated.)\n\n@user1:\nCan you give an example of what would be incompatible if this was just fixed?\n\n@author:\nI guess the real question I was raising was where we draw the line between breaking change and bugfix. But it dawns on me now that this entire special case of `AlgebraicNumber.__new__(expr, coeffs, ...)` where `expr` is an `AlgebraicNumber` could be regarded as a part of the same bug this PR sets out to fix; namely, places in the code where the `coeffs` of an `AlgebraicNumber` were forgotten/ignored.\r\n\r\nSo, I think the composing of polynomials here is clever, it brings the coeffs back into play, and it means that on unpickling we recover a number which is the same as a complex number, all of which is good. But unfortunately the internal structure still may differ. 
Example:\r\n\r\nAdding this in `.__new__()`:\r\n\r\n```python\r\n if expr.is_AlgebraicNumber:\r\n from sympy.polys.densetools import dup_compose\r\n c = dup_compose(rep.rep, expr.rep.rep, dom)\r\n rep = DMP.from_list(c, 0, dom)\r\n```\r\n\r\nwe now get:\r\n\r\n```python\r\n>>> import pickle\r\n>>> a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))\r\n>>> m = a.minpoly_of_elt()\r\n>>> b = AlgebraicNumber((m, a))\r\n>>> c = pickle.loads(pickle.dumps(b))\r\n>>> for e in [b, c]:\r\n>>> print('--------')\r\n>>> print('As a complex number: ', e.as_expr())\r\n>>> print('Our coeffs: ', e.rep)\r\n>>> print('Minpoly of primitive element:', e.minpoly)\r\n>>> print('Primitive element: ', e.root)\r\n>>> print('--------')\r\n>>> print('Equal after pickling:', b == c)\r\n--------\r\nAs a complex number: sqrt(2)\r\nOur coeffs: DMP([mpq(1,1), mpq(0,1)], QQ, None)\r\nMinpoly of primitive element: PurePoly(_x**2 - 2, _x, domain='QQ')\r\nPrimitive element: sqrt(2)\r\n--------\r\nAs a complex number: sqrt(2)\r\nOur coeffs: DMP([mpq(1,2), mpq(0,1), mpq(-9,2), mpq(0,1)], QQ, None)\r\nMinpoly of primitive element: PurePoly(_x**4 - 10*_x**2 + 1, _x, domain='QQ')\r\nPrimitive element: sqrt(2) + sqrt(3)\r\n--------\r\nEqual after pickling: False\r\n```\r\n\r\nI'm thinking the fix could instead be as simple as just deleting the special case,\r\n\r\nhttps://github.com/sympy/sympy/blob/52f606a503cea5e9588de14150ccb9f7f9ed4752/sympy/core/numbers.py#L2541-L2542\r\n\r\naltogether. Then we do test as equal after pickling:\r\n\r\n```python\r\n--------\r\nAs a complex number: -9*sqrt(2) + sqrt(3)/2 + sqrt(2) + sqrt(3)**3/2\r\nOur coeffs: DMP([mpq(1,1), mpq(0,1)], QQ, None)\r\nMinpoly of primitive element: PurePoly(_x**2 - 2, _x, domain='QQ')\r\nPrimitive element: -9*sqrt(2) + sqrt(3)/2 + sqrt(2) + sqrt(3)**3/2\r\n--------\r\nAs a complex number: -9*sqrt(2) + sqrt(3)/2 + sqrt(2) + sqrt(3)**3/2\r\nOur coeffs: DMP([mpq(1,1), mpq(0,1)], QQ, None)\r\nMinpoly of primitive element: PurePoly(_x**2 - 2, _x, domain='QQ')\r\nPrimitive element: -9*sqrt(2) + sqrt(3)/2 + sqrt(2) + sqrt(3)**3/2\r\n--------\r\nEqual after pickling: True\r\n\r\n```\r\n\r\nNote that the printing under `.as_expr()` is also changed in this case. This is because we are now succeeding in restoring `self.root` as an instance of `AlgebraicNumber`, which stays atomic in simplifications.\r\n\r\nI think this is a good thing. If the old behavior is desired, then `b` can just be constructed a little differently:\r\n\r\n```python\r\n>>> b = AlgebraicNumber((m, a.to_root()))\r\n>>> print(b.as_expr())\r\nsqrt(2)\r\n```\r\n\r\nHere I'm using the `to_root()` method which I've now implemented, and have just pushed. In this case `a` is transformed into a `Pow`, which does participate in expression simplifications.\r\n\r\nIt's complex. I want to continue thinking about it a bit more...\n\n@user1:\nOkay, I see the problem now. Your examples use lots of complicated methods but it's actually just this:\r\n```python\r\nIn [15]: a = AlgebraicNumber(sqrt(2), [1, 2])\r\n\r\nIn [16]: a\r\nOut[16]: √2 + 2\r\n\r\nIn [17]: AlgebraicNumber(a)\r\nOut[17]: √2\r\n```\r\nThat's a clear bug to be fixed. 
Does the fix need to be much more complicated than this:\r\n```diff\r\ndiff --git a/sympy/core/numbers.py b/sympy/core/numbers.py\r\nindex e87b039..93c07c8 100644\r\n--- a/sympy/core/numbers.py\r\n+++ b/sympy/core/numbers.py\r\n@@ -2536,7 +2536,7 @@ def __new__(cls, expr, coeffs=None, alias=None, **args):\r\n from sympy.polys.polytools import Poly\r\n minpoly = Poly(minpoly)\r\n elif expr.is_AlgebraicNumber:\r\n- minpoly, root = expr.minpoly, expr.root\r\n+ minpoly, root, coeffs = expr.minpoly, expr.root, expr.coeffs()\r\n else:\r\n minpoly, root = minimal_polynomial(\r\n expr, args.get('gen'), polys=True), expr\r\n```\r\nCould also check if coeffs was provided as well as an AlgebraicNumber and if so use `dup_compose`.\n\n@author:\nOkay, this has the virtue of sticking closer to existing behavior, so I think it's good.\r\n\r\nBut while we're making `AlgebraicNumber(a)` behave more like a \"copy constructor\" when `a` is an `AlgebraicNumber`, we should probably also copy the `self.alias` from `a`, allowing this to be overridden by any `alias` that may have been passed.\r\n\r\nAlso, we need to modify the case where `expr` is an ordered pair `(m, r)`. If `r` is an `AlgebraicNumber` we have to convert it with `r = r.to_root()`. This (a) repairs pickling, and (b) makes this case consistent with the others.\r\n\r\nI've pushed some code that implements this, and adapts the docs.\n\n@author:\nScratch the bit about `r = r.to_root()`. I tried it, but reverted it since it broke some things.\r\n\r\nAnyway this was intended to address a separate issue -- pickling of `AlgebraicNumber` and `AlgebraicField` -- and I'd like to handle this in a separate PR." } ]
9d31b472894771e8e4d66000ede02625a0d47e5d
diff --git a/doc/src/modules/core.rst b/doc/src/modules/core.rst index c6fbe3ec8f59..0ade07ed6df3 100644 --- a/doc/src/modules/core.rst +++ b/doc/src/modules/core.rst @@ -129,6 +129,8 @@ AlgebraicNumber .. autoclass:: AlgebraicNumber :members: + .. automethod:: AlgebraicNumber.__new__ + NumberSymbol ^^^^^^^^^^^^ .. autoclass:: NumberSymbol diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py index 3b0bca77f5e7..cd7bdb2c1e47 100644 --- a/sympy/core/numbers.py +++ b/sympy/core/numbers.py @@ -2509,9 +2509,24 @@ def __invert__(self): class AlgebraicNumber(Expr): - """Class for representing algebraic numbers in SymPy. """ + r""" + Class for representing algebraic numbers in SymPy. + + Symbolically, an instance of this class represents an element + $\alpha \in \mathbb{Q}(\theta) \hookrightarrow \mathbb{C}$. That is, the + algebraic number $\alpha$ is represented as an element of a particular + number field $\mathbb{Q}(\theta)$, with a particular embedding of this + field into the complex numbers. + + Formally, the primitive element $\theta$ is given by two data points: (1) + its minimal polynomial (which defines $\mathbb{Q}(\theta)$), and (2) a + particular complex number that is a root of this polynomial (which defines + the embedding $\mathbb{Q}(\theta) \hookrightarrow \mathbb{C}$). Finally, + the algebraic number $\alpha$ which we represent is then given by the + coefficients of a polynomial in $\theta$. + """ - __slots__ = ('rep', 'root', 'alias', 'minpoly') + __slots__ = ('rep', 'root', 'alias', 'minpoly', '_own_minpoly') is_AlgebraicNumber = True is_algebraic = True @@ -2526,11 +2541,180 @@ class AlgebraicNumber(Expr): free_symbols: tSet[Basic] = set() def __new__(cls, expr, coeffs=None, alias=None, **args): - """Construct a new algebraic number. """ + r""" + Construct a new algebraic number $\alpha$ belonging to a number field + $k = \mathbb{Q}(\theta)$. + + There are four instance attributes to be determined: + + =========== ============================================================================ + Attribute Type/Meaning + =========== ============================================================================ + ``root`` :py:class:`~.Expr` for $\theta$ as a complex number + ``minpoly`` :py:class:`~.Poly`, the minimal polynomial of $\theta$ + ``rep`` :py:class:`~sympy.polys.polyclasses.DMP` giving $\alpha$ as poly in $\theta$ + ``alias`` :py:class:`~.Symbol` for $\theta$, or ``None`` + =========== ============================================================================ + + See Parameters section for how they are determined. + + Parameters + ========== + + expr : :py:class:`~.Expr`, or pair $(m, r)$ + There are three distinct modes of construction, depending on what + is passed as *expr*. + + **(1)** *expr* is an :py:class:`~.AlgebraicNumber`: + In this case we begin by copying all four instance attributes from + *expr*. If *coeffs* were also given, we compose the two coeff + polynomials (see below). If an *alias* was given, it overrides. + + **(2)** *expr* is any other type of :py:class:`~.Expr`: + Then ``root`` will equal *expr*. Therefore it + must express an algebraic quantity, and we will compute its + ``minpoly``. + + **(3)** *expr* is an ordered pair $(m, r)$ giving the + ``minpoly`` $m$, and a ``root`` $r$ thereof, which together + define $\theta$. 
In this case $m$ may be either a univariate + :py:class:`~.Poly` or any :py:class:`~.Expr` which represents the + same, while $r$ must be some :py:class:`~.Expr` representing a + complex number that is a root of $m$, including both explicit + expressions in radicals, and instances of + :py:class:`~.ComplexRootOf` or :py:class:`~.AlgebraicNumber`. + + coeffs : list, :py:class:`~.ANP`, None, optional (default=None) + This defines ``rep``, giving the algebraic number $\alpha$ as a + polynomial in $\theta$. + + If a list, the elements should be integers or rational numbers. + If an :py:class:`~.ANP`, we take its coefficients (using its + :py:meth:`~.ANP.to_list()` method). If ``None``, then the list of + coefficients defaults to ``[1, 0]``, meaning that $\alpha = \theta$ + is the primitive element of the field. + + If *expr* was an :py:class:`~.AlgebraicNumber`, let $g(x)$ be its + ``rep`` polynomial, and let $f(x)$ be the polynomial defined by + *coeffs*. Then ``self.rep`` will represent the composition + $(f \circ g)(x)$. + + alias : str, :py:class:`~.Symbol`, None, optional (default=None) + This is a way to provide a name for the primitive element. We + described several ways in which the *expr* argument can define the + value of the primitive element, but none of these methods gave it + a name. Here, for example, *alias* could be set as + ``Symbol('theta')``, in order to make this symbol appear when + $\alpha$ is printed, or rendered as a polynomial, using the + :py:meth:`~.as_poly()` method. + + Examples + ======== + + Recall that we are constructing an algebraic number as a field element + $\alpha \in \mathbb{Q}(\theta)$. + + >>> from sympy import AlgebraicNumber, sqrt, CRootOf, S + >>> from sympy.abc import x + + Example (1): $\alpha = \theta = \sqrt{2}$ + + >>> a1 = AlgebraicNumber(sqrt(2)) + >>> a1.minpoly_of_element().as_expr(x) + x**2 - 2 + >>> a1.evalf(10) + 1.414213562 + + Example (2): $\alpha = 3 \sqrt{2} - 5$, $\theta = \sqrt{2}$. We can + either build on the last example: + + >>> a2 = AlgebraicNumber(a1, [3, -5]) + >>> a2.as_expr() + -5 + 3*sqrt(2) + + or start from scratch: + + >>> a2 = AlgebraicNumber(sqrt(2), [3, -5]) + >>> a2.as_expr() + -5 + 3*sqrt(2) + + Example (3): $\alpha = 6 \sqrt{2} - 11$, $\theta = \sqrt{2}$. Again we + can build on the previous example, and we see that the coeff polys are + composed: + + >>> a3 = AlgebraicNumber(a2, [2, -1]) + >>> a3.as_expr() + -11 + 6*sqrt(2) + + reflecting the fact that $(2x - 1) \circ (3x - 5) = 6x - 11$. + + Example (4): $\alpha = \sqrt{2}$, $\theta = \sqrt{2} + \sqrt{3}$. The + easiest way is to use the :py:func:`~.to_number_field()` function: + + >>> from sympy import to_number_field + >>> a4 = to_number_field(sqrt(2), sqrt(2) + sqrt(3)) + >>> a4.minpoly_of_element().as_expr(x) + x**2 - 2 + >>> a4.to_root() + sqrt(2) + >>> a4.primitive_element() + sqrt(2) + sqrt(3) + >>> a4.coeffs() + [1/2, 0, -9/2, 0] + + but if you already knew the right coefficients, you could construct it + directly: + + >>> a4 = AlgebraicNumber(sqrt(2) + sqrt(3), [S(1)/2, 0, S(-9)/2, 0]) + >>> a4.to_root() + sqrt(2) + >>> a4.primitive_element() + sqrt(2) + sqrt(3) + + Example (5): Construct the Golden Ratio as an element of the 5th + cyclotomic field, supposing we already know its coefficients. This time + we introduce the alias $\zeta$ for the primitive element of the field: + + >>> from sympy import cyclotomic_poly + >>> from sympy.abc import zeta + >>> a5 = AlgebraicNumber(CRootOf(cyclotomic_poly(5), -1), + ... 
[-1, -1, 0, 0], alias=zeta) + >>> a5.as_poly().as_expr() + -zeta**3 - zeta**2 + >>> a5.evalf() + 1.61803398874989 + + (The index ``-1`` to ``CRootOf`` selects the complex root with the + largest real and imaginary parts, which in this case is + $\mathrm{e}^{2i\pi/5}$. See :py:class:`~.ComplexRootOf`.) + + Example (6): Building on the last example, construct the number + $2 \phi \in \mathbb{Q}(\phi)$, where $\phi$ is the Golden Ratio: + + >>> from sympy.abc import phi + >>> a6 = AlgebraicNumber(a5.to_root(), coeffs=[2, 0], alias=phi) + >>> a6.as_poly().as_expr() + 2*phi + >>> a6.primitive_element().evalf() + 1.61803398874989 + + Note that we needed to use ``a5.to_root()``, since passing ``a5`` as + the first argument would have constructed the number $2 \phi$ as an + element of the field $\mathbb{Q}(\zeta)$: + + >>> a6_wrong = AlgebraicNumber(a5, coeffs=[2, 0]) + >>> a6_wrong.as_poly().as_expr() + -2*zeta**3 - 2*zeta**2 + >>> a6_wrong.primitive_element().evalf() + 0.309016994374947 + 0.951056516295154*I + + """ from sympy.polys.polyclasses import ANP, DMP from sympy.polys.numberfields import minimal_polynomial expr = sympify(expr) + rep0 = None + alias0 = None if isinstance(expr, (tuple, Tuple)): minpoly, root = expr @@ -2539,7 +2723,8 @@ def __new__(cls, expr, coeffs=None, alias=None, **args): from sympy.polys.polytools import Poly minpoly = Poly(minpoly) elif expr.is_AlgebraicNumber: - minpoly, root = expr.minpoly, expr.root + minpoly, root, rep0, alias0 = (expr.minpoly, expr.root, + expr.rep, expr.alias) else: minpoly, root = minimal_polynomial( expr, args.get('gen'), polys=True), expr @@ -2554,15 +2739,22 @@ def __new__(cls, expr, coeffs=None, alias=None, **args): rep = DMP.from_list(coeffs.to_list(), 0, dom) scoeffs = Tuple(*coeffs.to_list()) - if rep.degree() >= minpoly.degree(): - rep = rep.rem(minpoly.rep) - else: rep = DMP.from_list([1, 0], 0, dom) scoeffs = Tuple(1, 0) + if rep0 is not None: + from sympy.polys.densetools import dup_compose + c = dup_compose(rep.rep, rep0.rep, dom) + rep = DMP.from_list(c, 0, dom) + scoeffs = Tuple(*c) + + if rep.degree() >= minpoly.degree(): + rep = rep.rem(minpoly.rep) + sargs = (root, scoeffs) + alias = alias or alias0 if alias is not None: from .symbol import Symbol if not isinstance(alias, Symbol): @@ -2576,6 +2768,8 @@ def __new__(cls, expr, coeffs=None, alias=None, **args): obj.alias = alias obj.minpoly = minpoly + obj._own_minpoly = None + return obj def __hash__(self): @@ -2641,6 +2835,184 @@ def _eval_simplify(self, **kwargs): return AlgebraicNumber(r) return self + @property + def is_primitive_element(self): + r""" + Say whether this algebraic number $\alpha \in \mathbb{Q}(\theta)$ is + equal to the primitive element $\theta$ for its field. + """ + c = self.coeffs() + # Second case occurs if self.minpoly is linear: + return c == [1, 0] or c == [self.root] + + def primitive_element(self): + r""" + Get the primitive element $\theta$ for the number field + $\mathbb{Q}(\theta)$ to which this algebraic number $\alpha$ belongs. + + Returns + ======= + + AlgebraicNumber + + """ + if self.is_primitive_element: + return self + return AlgebraicNumber((self.minpoly, self.root), coeffs=[1, 0]) + + def to_primitive_element(self, radicals=True): + r""" + Convert ``self`` to an :py:class:`~.AlgebraicNumber` instance that is + equal to its own primitive element. 
+ + Explanation + =========== + + If we represent $\alpha \in \mathbb{Q}(\theta)$, $\alpha \neq \theta$, + construct a new :py:class:`~.AlgebraicNumber` that represents + $\alpha \in \mathbb{Q}(\alpha)$. + + Examples + ======== + + >>> from sympy import sqrt, to_number_field + >>> from sympy.abc import x + >>> a = to_number_field(sqrt(2), sqrt(2) + sqrt(3)) + + The :py:class:`~.AlgebraicNumber` ``a`` represents the number + $\sqrt{2}$ in the field $\mathbb{Q}(\sqrt{2} + \sqrt{3})$. Rendering + ``a`` as a polynomial, + + >>> a.as_poly().as_expr(x) + x**3/2 - 9*x/2 + + reflects the fact that $\sqrt{2} = \theta^3/2 - 9 \theta/2$, where + $\theta = \sqrt{2} + \sqrt{3}$. + + ``a`` is not equal to its own primitive element. Its minpoly + + >>> a.minpoly.as_poly().as_expr(x) + x**4 - 10*x**2 + 1 + + is that of $\theta$. + + Converting to a primitive element, + + >>> a_prim = a.to_primitive_element() + >>> a_prim.minpoly.as_poly().as_expr(x) + x**2 - 2 + + we obtain an :py:class:`~.AlgebraicNumber` whose ``minpoly`` is that of + the number itself. + + Parameters + ========== + + radicals : boolean, optional (default=True) + If ``True``, then we will try to return an + :py:class:`~.AlgebraicNumber` whose ``root`` is an expression + in radicals. If that is not possible (or if *radicals* is + ``False``), ``root`` will be a :py:class:`~.ComplexRootOf`. + + Returns + ======= + + AlgebraicNumber + + See Also + ======== + + is_primitive_element + + """ + if self.is_primitive_element: + return self + m = self.minpoly_of_element() + r = self.to_root(radicals=radicals) + return AlgebraicNumber((m, r)) + + def minpoly_of_element(self): + r""" + Compute the minimal polynomial for this algebraic number. + + Explanation + =========== + + Recall that we represent an element $\alpha \in \mathbb{Q}(\theta)$. + Our instance attribute ``self.minpoly`` is the minimal polynomial for + our primitive element $\theta$. This method computes the minimal + polynomial for $\alpha$. + + """ + if self._own_minpoly is None: + if self.is_primitive_element: + self._own_minpoly = self.minpoly + else: + from sympy.polys.numberfields.minpoly import minpoly + theta = self.primitive_element() + self._own_minpoly = minpoly(self.as_expr(theta), polys=True) + return self._own_minpoly + + def to_root(self, radicals=True, minpoly=None): + """ + Convert to an :py:class:`~.Expr` that is not an + :py:class:`~.AlgebraicNumber`, specifically, either a + :py:class:`~.ComplexRootOf`, or, optionally and where possible, an + expression in radicals. + + Parameters + ========== + + radicals : boolean, optional (default=True) + If ``True``, then we will try to return the root as an expression + in radicals. If that is not possible, we will return a + :py:class:`~.ComplexRootOf`. + + minpoly : :py:class:`~.Poly` + If the minimal polynomial for `self` has been pre-computed, it can + be passed in order to save time. + + """ + if self.is_primitive_element and not isinstance(self.root, AlgebraicNumber): + return self.root + m = minpoly or self.minpoly_of_element() + roots = m.all_roots(radicals=radicals) + if len(roots) == 1: + return roots[0] + root = None + if all(hasattr(r, "_get_interval") for r in roots): + root = self._to_root_by_intervals(roots) + if root is not None: + return root + return self._to_root_by_distance(roots) + + def _to_root_by_intervals(self, roots): + intervals = [r._get_interval() for r in roots] + D0 = int(max(i.max_denom for i in intervals)) + # Make n more than the number of decimal places in D0. 
This is to + # eliminate false positives, i.e. cases where we appear to belong to + # an interval but only due to rounding errors. + n = math.ceil(D0.bit_length()/3.3) + 2 + c = self.evalf(n).as_real_imag() + for j, i in enumerate(intervals): + if c in i: + return roots[j] + return None + + def _to_root_by_distance(self, roots, max_prec=160): + # Compare sympy.polys.numberfields.minpoly._choose_factor() + prec1 = 10 + while prec1 <= max_prec: + r0 = self.evalf(prec1) + candidates = [(abs(r0 - r.evalf(prec1)), j) + for j, r in enumerate(roots)] + can = sorted(candidates) + (a, ix), (b, _) = can[:2] + if b > a * 10 ** 6: + return roots[ix] + prec1 *= 2 + raise NotImplementedError("Could not locate root.") + class RationalConstant(Rational): """ diff --git a/sympy/polys/numberfields/minpoly.py b/sympy/polys/numberfields/minpoly.py index 797a2af20521..fa0265a0a579 100644 --- a/sympy/polys/numberfields/minpoly.py +++ b/sympy/polys/numberfields/minpoly.py @@ -808,10 +808,10 @@ def bottom_up_scan(ex): else: return symbols[expr] elif ex.is_AlgebraicNumber: - if ex.root not in mapping: - return update_mapping(ex.root, ex.minpoly) + if ex not in mapping: + return update_mapping(ex, ex.minpoly_of_element()) else: - return symbols[ex.root] + return symbols[ex] raise NotAlgebraic("%s doesn't seem to be an algebraic number" % ex) @@ -840,7 +840,7 @@ def simpler_inverse(ex): inverted = False ex = expand_multinomial(ex) if ex.is_AlgebraicNumber: - return ex.minpoly.as_expr(x) + return ex.minpoly_of_element().as_expr(x) elif ex.is_Rational: result = ex.q*x - ex.p else: diff --git a/sympy/polys/numberfields/subfield.py b/sympy/polys/numberfields/subfield.py index cc5e383a3ad1..dc091c59539b 100644 --- a/sympy/polys/numberfields/subfield.py +++ b/sympy/polys/numberfields/subfield.py @@ -209,6 +209,9 @@ def field_isomorphism(a, b, *, fast=True): if not b.is_AlgebraicNumber: b = AlgebraicNumber(b) + a = a.to_primitive_element() + b = b.to_primitive_element() + if a == b: return a.coeffs() diff --git a/sympy/polys/numberfields/tests/test_minpoly.py b/sympy/polys/numberfields/tests/test_minpoly.py index ee202b914646..39fd55e2ee7d 100644 --- a/sympy/polys/numberfields/tests/test_minpoly.py +++ b/sympy/polys/numberfields/tests/test_minpoly.py @@ -427,6 +427,13 @@ def test_issue_22559(): assert minimal_polynomial(alpha**3, x) == x**2 - 8 +def test_issue_22561(): + a = AlgebraicNumber(sqrt(2) + sqrt(3), [S(1) / 2, 0, S(-9) / 2, 0], gen=x) + assert a.as_expr() == sqrt(2) + assert minimal_polynomial(a, x) == x**2 - 2 + assert minimal_polynomial(a**3, x) == x**2 - 8 + + def test_separate_sq_not_impl(): raises(NotImplementedError, lambda: _separate_sq(x**(S(1)/3) + x)) diff --git a/sympy/polys/numberfields/tests/test_numbers.py b/sympy/polys/numberfields/tests/test_numbers.py index 5292957a2ce8..f8f350719cc7 100644 --- a/sympy/polys/numberfields/tests/test_numbers.py +++ b/sympy/polys/numberfields/tests/test_numbers.py @@ -1,7 +1,7 @@ """Tests on algebraic numbers. 
""" from sympy.core.containers import Tuple -from sympy.core.numbers import (AlgebraicNumber, Rational) +from sympy.core.numbers import (AlgebraicNumber, I, Rational) from sympy.core.singleton import S from sympy.core.symbol import Symbol from sympy.functions.elementary.miscellaneous import sqrt @@ -9,6 +9,7 @@ from sympy.polys.numberfields.subfield import to_number_field from sympy.polys.polyclasses import DMP from sympy.polys.domains import QQ +from sympy.polys.rootoftools import CRootOf from sympy.abc import x, y @@ -149,6 +150,21 @@ def test_AlgebraicNumber(): a = AlgebraicNumber(sqrt(2), [1, 2, 3]) assert a.args == (sqrt(2), Tuple(1, 2, 3)) + a = AlgebraicNumber(sqrt(2), [1, 2], "alpha") + b = AlgebraicNumber(a) + c = AlgebraicNumber(a, alias="gamma") + assert a == b + assert c.alias.name == "gamma" + + a = AlgebraicNumber(sqrt(2) + sqrt(3), [S(1)/2, 0, S(-9)/2, 0]) + b = AlgebraicNumber(a, [1, 0, 0]) + assert b.root == a.root + assert a.to_root() == sqrt(2) + assert b.to_root() == 2 + + a = AlgebraicNumber(2) + assert a.is_primitive_element is True + def test_to_algebraic_integer(): a = AlgebraicNumber(sqrt(3), gen=x).to_algebraic_integer() @@ -173,3 +189,14 @@ def test_to_algebraic_integer(): assert a.minpoly == x**2 - 12 assert a.root == 2*sqrt(3) assert a.rep == DMP([QQ(7, 19), QQ(3)], QQ) + + +def test_AlgebraicNumber_to_root(): + assert AlgebraicNumber(sqrt(2)).to_root() == sqrt(2) + + zeta5_squared = AlgebraicNumber(CRootOf(x**5 - 1, 4), coeffs=[1, 0, 0]) + assert zeta5_squared.to_root() == CRootOf(x**4 + x**3 + x**2 + x + 1, 1) + + zeta3_squared = AlgebraicNumber(CRootOf(x**3 - 1, 2), coeffs=[1, 0, 0]) + assert zeta3_squared.to_root() == -S(1)/2 - sqrt(3)*I/2 + assert zeta3_squared.to_root(radicals=False) == CRootOf(x**2 + x + 1, 0) diff --git a/sympy/polys/numberfields/tests/test_subfield.py b/sympy/polys/numberfields/tests/test_subfield.py index 68cf230f5157..918e19a694a9 100644 --- a/sympy/polys/numberfields/tests/test_subfield.py +++ b/sympy/polys/numberfields/tests/test_subfield.py @@ -281,3 +281,9 @@ def test_to_number_field(): assert to_number_field(sqrt(2), AlgebraicNumber(sqrt(2) + sqrt(3))) == a raises(IsomorphismFailed, lambda: to_number_field(sqrt(2), sqrt(3))) + + +def test_issue_22561(): + a = to_number_field(sqrt(2), sqrt(2) + sqrt(3)) + b = to_number_field(sqrt(2), sqrt(2) + sqrt(5)) + assert field_isomorphism(a, b) == [1, 0] diff --git a/sympy/polys/rootisolation.py b/sympy/polys/rootisolation.py index 9bb064723596..cedfbf6dd0df 100644 --- a/sympy/polys/rootisolation.py +++ b/sympy/polys/rootisolation.py @@ -1732,6 +1732,11 @@ def center(self): """Return the center of the real isolating interval. """ return (self.a + self.b)/2 + @property + def max_denom(self): + """Return the largest denominator occurring in either endpoint. """ + return max(self.a.denominator, self.b.denominator) + def as_tuple(self): """Return tuple representation of real isolating interval. """ return (self.a, self.b) @@ -1739,6 +1744,24 @@ def as_tuple(self): def __repr__(self): return "(%s, %s)" % (self.a, self.b) + def __contains__(self, item): + """ + Say whether a complex number belongs to this real interval. + + Parameters + ========== + + item : pair (re, im) or number re + Either a pair giving the real and imaginary parts of the number, + or else a real number. + + """ + if isinstance(item, tuple): + re, im = item + else: + re, im = item, 0 + return im == 0 and self.a <= re <= self.b + def is_disjoint(self, other): """Return ``True`` if two isolation intervals are disjoint. 
""" if isinstance(other, RealInterval): @@ -1989,6 +2012,12 @@ def center(self): """Return the center of the complex isolating interval. """ return ((self.ax + self.bx)/2, (self.ay + self.by)/2) + @property + def max_denom(self): + """Return the largest denominator occurring in either endpoint. """ + return max(self.ax.denominator, self.bx.denominator, + self.ay.denominator, self.by.denominator) + def as_tuple(self): """Return tuple representation of the complex isolating interval's SW and NE corners, respectively. """ @@ -2002,6 +2031,25 @@ def conjugate(self): return ComplexInterval(self.a, self.b, self.I, self.Q, self.F1, self.F2, self.f1, self.f2, self.dom, conj=True) + def __contains__(self, item): + """ + Say whether a complex number belongs to this complex rectangular + region. + + Parameters + ========== + + item : pair (re, im) or number re + Either a pair giving the real and imaginary parts of the number, + or else a real number. + + """ + if isinstance(item, tuple): + re, im = item + else: + re, im = item, 0 + return self.ax <= re <= self.bx and self.ay <= im <= self.by + def is_disjoint(self, other): """Return ``True`` if two isolation intervals are disjoint. """ if isinstance(other, RealInterval): diff --git a/sympy/polys/rootoftools.py b/sympy/polys/rootoftools.py index e2ae271cfd04..e015aec9d13a 100644 --- a/sympy/polys/rootoftools.py +++ b/sympy/polys/rootoftools.py @@ -167,7 +167,12 @@ class ComplexRootOf(RootOf): """Represents an indexed complex root of a polynomial. Roots of a univariate polynomial separated into disjoint - real or complex intervals and indexed in a fixed order. + real or complex intervals and indexed in a fixed order: + + * real roots come first and are sorted in increasing order; + * complex roots come next and are sorted primarily by increasing + real part, secondarily by increasing imaginary part. + Currently only rational coefficients are allowed. Can be imported as ``CRootOf``. To avoid confusion, the generator must be a Symbol.
{ "difficulty": "high", "estimated_review_effort": 4, "problem_domain": "Bug Fixes" }
xonsh__xonsh-4817@8658381
xonsh/xonsh
Python
4,817
xontrib load/unload
<!--- Thanks for opening a PR on xonsh! Please include a news entry with your PR to help keep our changelog up to date! There are instructions available here: https://xon.sh/devguide.html#changelog --> <!--- If there is specific issue / feature request that this PR is addressing, please link to the corresponding issue by using the `#issuenumber` syntax. Thanks again! --> - [x] implement unloading ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2022-05-23T17:09:36Z
[RFC] Feat: Improve xontrib lifecycle Xontribs do not, as of writing, provide a mechanism for unloading. IPython has the `_unload_ipython_extension` function that is called by IPython when a user unloads an extension. It would be nice to offer the same feature here. I propose that we follow a similar design, and add two methods: ```python3 def _load_xontrib(session): ... ``` and ```python3 def _unload_xontrib(session): ... ``` where the latter is optional. For some transition period, we should not require `_load_xontrib` to be present in the `__init__`, but once that grace period has elapsed, any xontrib that doesn't implement at least `_load_xontrib` should fail to load. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
[ { "body": "Xontribs do not, as of writing, provide a mechanism for unloading. IPython has the `_unload_ipython_extension` function that is called by IPython when a user unloads an extension. It would be nice to offer the same feature here.\r\n\r\nI propose that we follow a similar design, and add two methods:\r\n```python3\r\ndef _load_xontrib(session):\r\n ...\r\n```\r\nand\r\n```python3\r\ndef _unload_xontrib(session):\r\n ...\r\n```\r\n\r\nwhere the latter is optional.\r\n\r\nFor some transition period, we should not require `_load_xontrib` to be present in the `__init__`, but once that grace period has elapsed, any xontrib that doesn't implement at least `_load_xontrib` should fail to load.\r\n\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 4541, "title": "[RFC] Feat: Improve xontrib lifecycle" } ]
e7ab9a4eb82c25f9ef329784dbf194c8b23065d4
{ "head_commit": "8658381c56846d3f1edc7621a484352a37190e5e", "head_commit_message": "chore: give explicit name", "patch_to_review": "diff --git a/docs/tutorial_xontrib.rst b/docs/tutorial_xontrib.rst\nindex 88956e8e02..a51b8de06d 100644\n--- a/docs/tutorial_xontrib.rst\n+++ b/docs/tutorial_xontrib.rst\n@@ -24,15 +24,50 @@ took inspiration from for xonsh:\n * `Sphinx <http://sphinx-doc.org/>`_: Extensions are just Python modules,\n bundles some extensions with the main package, interface is a list of\n string names.\n+* `IPython <https://ipython.readthedocs.io/en/stable/config/extensions/index.html>`_: Extensions are just Python modules\n+ with some special functions to load/unload.\n * `Oh My Zsh <http://ohmyz.sh/>`_: Centralized registry, autoloading, and\n for a shell.\n * `ESLint <http://eslint.org/>`_: Ability to use language package manager\n to install/remove extensions.\n \n-\n Structure\n-==========\n-Xontribs are modules written in either xonsh (``*.xsh``) or Python (``*.py``).\n+================\n+Xontribs are modules with some special functions written\n+in either xonsh (``*.xsh``) or Python (``*.py``) that has a couple of special functions to load and unload it.\n+\n+It is inspired from\n+\n+Here is a template:\n+\n+.. code-block:: python\n+ from xonsh.built_ins import XonshSession\n+\n+ def _load_xontrib_(xsh: XonshSession, **kwargs) -> dict:\n+ \"\"\"\n+ this function will be called when loading/reloading the xontrib.\n+\n+ Args:\n+ xsh: the current xonsh session instance, serves as the interface to manipulate the session.\n+ This allows you to register new aliases, history backends, event listeners ...\n+ **kwargs: it is empty as of now. Kept for future proofing.\n+ Returns:\n+ dict: this will get loaded into the current execution context\n+ \"\"\"\n+\n+ def _unload_xontrib_(xsh: XonshSession, **kwargs) -> dict:\n+ \"\"\"If you want your extension to be unloadable, put that logic here\"\"\"\n+\n+This _load_xontrib_() function is called after your extension is imported,\n+and the currently active :py:class:`xonsh.built_ins.XonshSession` instance is passed as the argument.\n+\n+.. note::\n+\n+ Xontribs without ``_load_xontrib_`` are still supported.\n+ But when such xontrib is loaded, variables listed\n+ in ``__all__`` are placed in the current\n+ execution context if defined.\n+\n Normally, these are stored and found in an\n `implicit namespace package <https://www.python.org/dev/peps/pep-0420/>`_\n called ``xontrib``. However, xontribs may be placed in any package or directory\n@@ -64,8 +99,7 @@ Here is a sample file system layout and what the xontrib names would be::\n |- done.py # \"mypkg.subpkg.done\", full module name\n \n \n-You can also use `cookiecutter <https://github.com/audreyr/cookiecutter>`_ with\n-the `xontrib template <https://github.com/xonsh/xontrib-cookiecutter>`_ to easily\n+You can also use the `xontrib template <https://github.com/xonsh/xontrib-cookiecutter>`_ to easily\n create the layout for your xontrib package.\n \n \n@@ -73,36 +107,27 @@ Loading Xontribs\n ================\n Xontribs may be loaded in a few different ways: from the config file\n (e.g. ``~/.config/xonsh/rc.xsh``), dynamically at runtime with\n-the ``xontrib`` command, or by importing the\n-module normally. Since these extensions are just Python modules, by\n-default, they cannot be unloaded (easily).\n-\n-.. 
note::\n-\n- When a xontrib is loaded its public variables are placed in the current\n- execution context unless ``__all__`` is defined, just like in regular Python\n- modules.\n+the ``xontrib`` command, or its Python API.\n \n-Extensions are loaded via the ``xontrib`` command, which is a xonsh default\n-alias. This command may be run from anywhere in a xonshrc file or at any point\n-after xonsh has started up. Loading is the default action of the ``xontrib``\n-command. Thus the following methods for loading via this command are equivalent:\n+Extensions are loaded via the ``xontrib load`` command.\n+This command may be run from anywhere in a xonshrc file or at any point\n+after xonsh has started up.\n \n .. code-block:: xonsh\n \n- xontrib myext mpl mypkg.show\n xontrib load myext mpl mypkg.show\n \n-Loading the same xontrib multiple times does not have any effect after the\n-first. Xontribs are simply Python modules, and therefore follow the same\n-caching rules. So by the same token, you can also import them normally.\n-Of course, you have to use the full module name to import a xontrib:\n+The same can be done in Python as well\n \n .. code-block:: python\n \n- import xontrib.mpl\n- from xontrib import myext\n- from mypkg.show import *\n+ from xonsh.xontribs import xontribs_load\n+ xontribs_load(['myext', 'mpl', 'mypkg.show'])\n+\n+A xontrib can be unloaded from the current session using ``xontrib unload``\n+\n+.. code-block:: xonsh\n+ xontrib unload myext mpl mypkg.show\n \n \n Listing Known Xontribs\ndiff --git a/news/feat-xontrib-lifecycle.rst b/news/feat-xontrib-lifecycle.rst\nnew file mode 100644\nindex 0000000000..703ddbdb96\n--- /dev/null\n+++ b/news/feat-xontrib-lifecycle.rst\n@@ -0,0 +1,26 @@\n+**Added:**\n+\n+* Now xontribs support `loading and unloading <https://github.com/xonsh/xonsh/issues/4541>`_\n+ with functions ``_load_xontrib_(xsh: XonshSession, **kwargs) -> dict``,\n+ ``_unload_xontrib_(xsh: XonshSession, **kwargs) -> None`` defined in their module.\n+ `Updated doc <https://xon.sh/tutorial_xontrib.html>`_\n+\n+**Changed:**\n+\n+* <news item>\n+\n+**Deprecated:**\n+\n+* <news item>\n+\n+**Removed:**\n+\n+* <news item>\n+\n+**Fixed:**\n+\n+* <news item>\n+\n+**Security:**\n+\n+* <news item>\ndiff --git a/tests/test_xontribs.py b/tests/test_xontribs.py\nindex 1fdd1c2062..8291af82be 100644\n--- a/tests/test_xontribs.py\n+++ b/tests/test_xontribs.py\n@@ -8,6 +8,8 @@\n xontribs_load,\n xontribs_loaded,\n xontribs_main,\n+ xontribs_reload,\n+ xontribs_unload,\n )\n \n \n@@ -96,6 +98,51 @@ def test_xontrib_load(tmpmod):\n assert \"script\" in xontribs_loaded()\n \n \n+def test_xontrib_unload(tmpmod, xession):\n+ with tmpmod.mkdir(\"xontrib\").join(\"script.py\").open(\"w\") as x:\n+ x.write(\n+ \"\"\"\n+hello = 'world'\n+\n+def _unload_xontrib_(xsh): del xsh.ctx['hello']\n+\"\"\"\n+ )\n+\n+ xontribs_load([\"script\"])\n+ assert \"script\" in xontribs_loaded()\n+ assert \"hello\" in xession.ctx\n+ xontribs_unload([\"script\"])\n+ assert \"script\" not in xontribs_loaded()\n+ assert \"hello\" not in xession.ctx\n+\n+\n+def test_xontrib_reload(tmpmod, xession):\n+ with tmpmod.mkdir(\"xontrib\").join(\"script.py\").open(\"w\") as x:\n+ x.write(\n+ \"\"\"\n+hello = 'world'\n+\n+def _unload_xontrib_(xsh): del xsh.ctx['hello']\n+\"\"\"\n+ )\n+\n+ xontribs_load([\"script\"])\n+ assert \"script\" in xontribs_loaded()\n+ assert xession.ctx[\"hello\"] == \"world\"\n+\n+ with tmpmod.join(\"xontrib\").join(\"script.py\").open(\"w\") as x:\n+ x.write(\n+ \"\"\"\n+hello = 'world1'\n+\n+def 
_unload_xontrib_(xsh): del xsh.ctx['hello']\n+\"\"\"\n+ )\n+ xontribs_reload([\"script\"])\n+ assert \"script\" in xontribs_loaded()\n+ assert xession.ctx[\"hello\"] == \"world1\"\n+\n+\n def test_xontrib_load_dashed(tmpmod):\n \"\"\"\n Test that .xsh xontribs are loadable\ndiff --git a/xonsh/events.py b/xonsh/events.py\nindex 86c7287a78..7f6012c52b 100644\n--- a/xonsh/events.py\n+++ b/xonsh/events.py\n@@ -269,6 +269,21 @@ class EventManager:\n Each event is just an attribute. They're created dynamically on first use.\n \"\"\"\n \n+ def register(self, func):\n+ \"\"\"\n+ wraps ``EventManager.doc``\n+\n+ Parameters\n+ ----------\n+ func\n+ extract name and doc from the function\n+ \"\"\"\n+\n+ name = func.__name__\n+ doc = inspect.getdoc(func)\n+ sign = inspect.signature(func)\n+ return self.doc(name, f\"{name}{sign}\\n\\n{doc}\")\n+\n def doc(self, name, docstring):\n \"\"\"\n Applies a docstring to an event.\ndiff --git a/xonsh/xontribs.py b/xonsh/xontribs.py\nindex 3939400d66..b2034e2d9a 100644\n--- a/xonsh/xontribs.py\n+++ b/xonsh/xontribs.py\n@@ -20,6 +20,10 @@ class ExitCode(IntEnum):\n INIT_FAILED = 2\n \n \n+class XontribNotInstalled(Exception):\n+ \"\"\"raised when the requested xontrib is not found\"\"\"\n+\n+\n def find_xontrib(name):\n \"\"\"Finds a xontribution from its name.\"\"\"\n spec = None\n@@ -36,12 +40,26 @@ def xontrib_context(name):\n spec = find_xontrib(name)\n if spec is None:\n return None\n- m = importlib.import_module(spec.name)\n- pubnames = getattr(m, \"__all__\", None)\n- if pubnames is not None:\n- ctx = {k: getattr(m, k) for k in pubnames}\n+ module = importlib.import_module(spec.name)\n+ ctx = {}\n+\n+ def _get__all__():\n+ pubnames = getattr(module, \"__all__\", None)\n+ if pubnames is None:\n+ for k in dir(module):\n+ if not k.startswith(\"_\"):\n+ yield k, getattr(module, k)\n+ else:\n+ for attr in pubnames:\n+ yield attr, getattr(module, attr)\n+\n+ entrypoint = getattr(module, \"_load_xontrib_\", None)\n+ if entrypoint is None:\n+ ctx.update(dict(_get__all__()))\n else:\n- ctx = {k: getattr(m, k) for k in dir(m) if not k.startswith(\"_\")}\n+ result = entrypoint(xsh=XSH)\n+ if result is not None:\n+ ctx.update(result)\n return ctx\n \n \n@@ -65,28 +83,30 @@ def prompt_xontrib_install(names: tp.List[str]):\n )\n \n \n-def update_context(name, ctx=None):\n- \"\"\"Updates a context in place from a xontrib. 
If ctx is not provided,\n- then __xonsh__.ctx is updated.\n- \"\"\"\n- if ctx is None:\n- ctx = XSH.ctx\n+def update_context(name, ctx: dict):\n+ \"\"\"Updates a context in place from a xontrib.\"\"\"\n modctx = xontrib_context(name)\n if modctx is None:\n- if not hasattr(update_context, \"bad_imports\"):\n- update_context.bad_imports = []\n- update_context.bad_imports.append(name)\n- return ctx\n- return ctx.update(modctx)\n+ raise XontribNotInstalled(f\"Xontrib - {name} is not found.\")\n+ else:\n+ ctx.update(modctx)\n+ return ctx\n \n \n-def xontrib_names_completer(**_):\n- for name, meta in get_xontribs().items():\n- full_name = f\"xontrib.{name}\"\n- if full_name not in sys.modules:\n+def _xontrib_name_completions(loaded=False):\n+ for name, meta, spec in _get_xontrib_specs():\n+ if (spec.name in sys.modules) is loaded:\n yield RichCompletion(name, append_space=True, description=meta.description)\n \n \n+def xontrib_names_completer(**_):\n+ yield from _xontrib_name_completions(loaded=False)\n+\n+\n+def xontrib_unload_completer(**_):\n+ yield from _xontrib_name_completions(loaded=True)\n+\n+\n def xontribs_load(\n names: Annotated[\n tp.Sequence[str],\n@@ -103,31 +123,90 @@ def xontribs_load(\n verbose : -v, --verbose\n verbose output\n \"\"\"\n- ctx = XSH.ctx\n+ ctx = {} if XSH.ctx is None else XSH.ctx\n res = ExitCode.OK\n stdout = None\n stderr = None\n+ bad_imports = []\n for name in names:\n if verbose:\n print(f\"loading xontrib {name!r}\")\n try:\n update_context(name, ctx=ctx)\n+ except XontribNotInstalled:\n+ bad_imports.append(name)\n except Exception:\n res = ExitCode.INIT_FAILED\n print_exception(f\"Failed to load xontrib {name}.\")\n- if hasattr(update_context, \"bad_imports\"):\n+ if bad_imports:\n res = ExitCode.NOT_FOUND\n- stderr = prompt_xontrib_install(update_context.bad_imports) # type: ignore\n- del update_context.bad_imports # type: ignore\n+ stderr = prompt_xontrib_install(bad_imports)\n return stdout, stderr, res\n \n \n+def xontribs_unload(\n+ names: Annotated[\n+ tp.Sequence[str],\n+ Arg(nargs=\"+\", completer=xontrib_unload_completer),\n+ ] = (),\n+ verbose=False,\n+):\n+ \"\"\"Unload the given xontribs\n+\n+ Parameters\n+ ----------\n+ names\n+ name of xontribs to unload\n+\n+ Notes\n+ -----\n+ Proper cleanup can be implemented by the xontrib. 
The default is equivalent to ``del sys.modules[module]``.\n+ \"\"\"\n+ for name in names:\n+ if verbose:\n+ print(f\"unloading xontrib {name!r}\")\n+ spec = find_xontrib(name)\n+ try:\n+ if spec and spec.name in sys.modules:\n+ module = sys.modules[spec.name]\n+ unloader = getattr(module, \"_unload_xontrib_\", None)\n+ if unloader is not None:\n+ unloader(XSH)\n+ del sys.modules[spec.name]\n+ except Exception as ex:\n+ print_exception(f\"Failed to unload xontrib {name} ({ex})\")\n+\n+\n+def xontribs_reload(\n+ names: Annotated[\n+ tp.Sequence[str],\n+ Arg(nargs=\"+\", completer=xontrib_unload_completer),\n+ ] = (),\n+ verbose=False,\n+):\n+ \"\"\"Reload the given xontribs\n+\n+ Parameters\n+ ----------\n+ names\n+ name of xontribs to reload\n+ \"\"\"\n+ for name in names:\n+ if verbose:\n+ print(f\"reloading xontrib {name!r}\")\n+ xontribs_unload([name])\n+ xontribs_load([name])\n+\n+\n+def _get_xontrib_specs():\n+ for xo_name, meta in get_xontribs().items():\n+ yield xo_name, meta, find_xontrib(xo_name)\n+\n+\n def xontrib_data():\n \"\"\"Collects and returns the data about installed xontribs.\"\"\"\n- meta = get_xontribs()\n data = {}\n- for xo_name in meta:\n- spec = find_xontrib(xo_name)\n+ for xo_name, _, spec in _get_xontrib_specs():\n loaded = spec.name in sys.modules\n data[xo_name] = {\"name\": xo_name, \"loaded\": loaded}\n \n@@ -173,6 +252,8 @@ class XontribAlias(ArgParserAlias):\n def build(self):\n parser = self.create_parser(prog=\"xontrib\")\n parser.add_command(xontribs_load, prog=\"load\")\n+ parser.add_command(xontribs_unload, prog=\"unload\")\n+ parser.add_command(xontribs_reload, prog=\"reload\")\n parser.add_command(_list)\n return parser\n \ndiff --git a/xontrib/abbrevs.py b/xontrib/abbrevs.py\nindex c010c52dbe..bf7160bd6d 100644\n--- a/xontrib/abbrevs.py\n+++ b/xontrib/abbrevs.py\n@@ -32,8 +32,7 @@\n from prompt_toolkit.filters import IsMultiline, completion_is_selected\n from prompt_toolkit.keys import Keys\n \n-from xonsh.built_ins import XSH, DynamicAccessProxy\n-from xonsh.events import events\n+from xonsh.built_ins import DynamicAccessProxy, XonshSession\n from xonsh.tools import check_for_partial_string\n \n __all__ = ()\n@@ -49,12 +48,6 @@ def __call__(self, word: str, buffer: Buffer) -> str:\n \n abbrevs: \"dict[str, AbbrValType]\" = dict()\n \n-# XSH.builtins is a namespace and extendable\n-XSH.builtins.abbrevs = abbrevs\n-\n-proxy = DynamicAccessProxy(\"abbrevs\", \"__xonsh__.builtins.abbrevs\")\n-builtins.abbrevs = proxy # type: ignore\n-\n \n class _LastExpanded(tp.NamedTuple):\n word: str\n@@ -121,7 +114,6 @@ def set_cursor_position(buffer, expanded: str) -> None:\n buffer.delete(len(EDIT_SYMBOL))\n \n \[email protected]_ptk_create\n def custom_keybindings(bindings, **kw):\n \n from prompt_toolkit.filters import EmacsInsertMode, ViInsertMode\n@@ -156,3 +148,13 @@ def multiline_carriage_return(event):\n if not current_char or current_char.isspace():\n abbrev.expand(buffer)\n carriage_return(buffer, event.cli)\n+\n+\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.builtins.events.on_ptk_create(custom_keybindings)\n+ # XSH.builtins is a namespace and extendable\n+ xsh.builtins.abbrevs = abbrevs\n+ proxy = DynamicAccessProxy(\"abbrevs\", \"__xonsh__.builtins.abbrevs\")\n+ builtins.abbrevs = proxy # type: ignore\n+\n+ return {\"abbrevs\": abbrevs}\ndiff --git a/xontrib/autovox.py b/xontrib/autovox.py\nindex 40aed1e05e..52f73fa45c 100644\n--- a/xontrib/autovox.py\n+++ b/xontrib/autovox.py\n@@ -13,26 +13,22 @@\n from pathlib import Path\n \n import 
xontrib.voxapi as voxapi\n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XSH, XonshSession\n \n __all__ = ()\n \n \n-XSH.builtins.events.doc(\n- \"autovox_policy\",\n+def autovox_policy(path: \"Path\") -> \"str|Path|None\":\n \"\"\"\n-autovox_policy(path: pathlib.Path) -> Union[str, pathlib.Path, None]\n+ Register a policy with autovox.\n \n-Register a policy with autovox.\n+ A policy is a function that takes a Path and returns the venv associated with it,\n+ if any.\n \n-A policy is a function that takes a Path and returns the venv associated with it,\n-if any.\n-\n-NOTE: The policy should only return a venv for this path exactly, not for\n-parent paths. Parent walking is handled by autovox so that all policies can\n-be queried at each level.\n-\"\"\",\n-)\n+ NOTE: The policy should only return a venv for this path exactly, not for\n+ parent paths. Parent walking is handled by autovox so that all policies can\n+ be queried at each level.\n+ \"\"\"\n \n \n class MultipleVenvsWarning(RuntimeWarning):\n@@ -80,7 +76,8 @@ def check_for_new_venv(curdir, olddir):\n \n \n # Core mechanism: Check for venv when the current directory changes\[email protected]_chdir\n+\n+\n def cd_handler(newdir, olddir, **_):\n check_for_new_venv(Path(newdir), Path(olddir))\n \n@@ -88,12 +85,10 @@ def cd_handler(newdir, olddir, **_):\n # Recalculate when venvs are created or destroyed\n \n \[email protected]_on_create\n def create_handler(**_):\n check_for_new_venv(Path.cwd(), ...)\n \n \[email protected]_on_destroy\n def destroy_handler(**_):\n check_for_new_venv(Path.cwd(), ...)\n \n@@ -101,6 +96,13 @@ def destroy_handler(**_):\n # Initial activation before first prompt\n \n \[email protected]_post_init\n def load_handler(**_):\n check_for_new_venv(Path.cwd(), None)\n+\n+\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.builtins.events.register(autovox_policy)\n+ xsh.builtins.events.on_chdir(cd_handler)\n+ xsh.builtins.events.vox_on_create(create_handler)\n+ xsh.builtins.events.vox_on_destroy(destroy_handler)\n+ xsh.builtins.events.on_post_init(load_handler)\ndiff --git a/xontrib/bashisms.py b/xontrib/bashisms.py\nindex 01cfdd04b2..41fc263094 100644\n--- a/xontrib/bashisms.py\n+++ b/xontrib/bashisms.py\n@@ -15,7 +15,7 @@\n import shlex\n import sys\n \n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XSH, XonshSession\n \n __all__ = ()\n \n@@ -28,7 +28,6 @@ def _warn_not_supported(msg: str):\n )\n \n \[email protected]_transform_command\n def bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n@@ -82,10 +81,6 @@ def alias(args, stdin=None):\n return ret\n \n \n-XSH.aliases[\"alias\"] = alias\n-XSH.env[\"THREAD_SUBPROCS\"] = False\n-\n-\n def _unset(args):\n if not args:\n print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n@@ -97,9 +92,6 @@ def _unset(args):\n print(f\"{v} not found\", file=sys.stderr)\n \n \n-XSH.aliases[\"unset\"] = _unset\n-\n-\n def _export(args):\n if not args:\n print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n@@ -112,9 +104,6 @@ def _export(args):\n print(f\"{eq} equal sign not found\", file=sys.stderr)\n \n \n-XSH.aliases[\"export\"] = _export\n-\n-\n def _set(args):\n arg = args[0]\n if arg == \"-e\":\n@@ -129,9 +118,6 @@ def _set(args):\n _warn_not_supported(f\"set {arg}\")\n \n \n-XSH.aliases[\"set\"] = _set\n-\n-\n def _shopt(args):\n \n supported_shopt = [\"DOTGLOB\"]\n@@ -157,7 +143,12 @@ def _shopt(args):\n _warn_not_supported(f\"shopt {args}\")\n \n \n-XSH.aliases[\"shopt\"] = _shopt\n-\n-\n-XSH.aliases[\"complete\"] 
= \"completer list\"\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.builtins.events.on_transform_command(bash_preproc)\n+ xsh.aliases.register(_unset)\n+ xsh.aliases.register(_export)\n+ xsh.aliases.register(_shopt)\n+ xsh.aliases.register(_set)\n+ xsh.aliases[\"complete\"] = \"completer list\".split()\n+ xsh.aliases[\"alias\"] = alias\n+ xsh.env[\"THREAD_SUBPROCS\"] = False\ndiff --git a/xontrib/coreutils.py b/xontrib/coreutils.py\nindex bf2eb79c9b..7384b0e345 100644\n--- a/xontrib/coreutils.py\n+++ b/xontrib/coreutils.py\n@@ -14,7 +14,7 @@\n tools avoid the need for a full subprocess call. Additionally, these\n tools are cross-platform.\n \"\"\"\n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XonshSession\n from xonsh.platform import ON_POSIX\n from xonsh.xoreutils.cat import cat\n from xonsh.xoreutils.echo import echo\n@@ -26,20 +26,18 @@\n from xonsh.xoreutils.uptime import uptime\n from xonsh.xoreutils.yes import yes\n \n-__all__ = ()\n \n-XSH.aliases[\"cat\"] = cat\n-XSH.aliases[\"echo\"] = echo\n-XSH.aliases[\"pwd\"] = pwd\n-XSH.aliases[\"tee\"] = tee\n-XSH.aliases[\"tty\"] = tty\n-XSH.aliases[\"uname\"] = uname\n-XSH.aliases[\"uptime\"] = uptime\n-XSH.aliases[\"yes\"] = yes\n-XSH.aliases[\"umask\"] = umask\n-XSH.aliases[\"uptime\"] = uptime\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.aliases[\"cat\"] = cat\n+ xsh.aliases[\"echo\"] = echo\n+ xsh.aliases[\"pwd\"] = pwd\n+ xsh.aliases[\"tee\"] = tee\n+ xsh.aliases[\"tty\"] = tty\n+ xsh.aliases[\"uname\"] = uname\n+ xsh.aliases[\"uptime\"] = uptime\n+ xsh.aliases[\"umask\"] = umask\n+ xsh.aliases[\"yes\"] = yes\n+ if ON_POSIX:\n+ from xonsh.xoreutils.ulimit import ulimit\n \n-if ON_POSIX:\n- from xonsh.xoreutils.ulimit import ulimit\n-\n- XSH.aliases[\"ulimit\"] = ulimit\n+ xsh.aliases[\"ulimit\"] = ulimit\ndiff --git a/xontrib/fish_completer.py b/xontrib/fish_completer.py\nindex 4b212e7879..7782b3de5a 100644\n--- a/xontrib/fish_completer.py\n+++ b/xontrib/fish_completer.py\n@@ -26,4 +26,5 @@ def fish_proc_completer(ctx: CommandContext):\n )\n \n \n-completer.add_one_completer(\"fish\", fish_proc_completer, \"<bash\")\n+def _load_xontrib_(**_):\n+ completer.add_one_completer(\"fish\", fish_proc_completer, \"<bash\")\ndiff --git a/xontrib/free_cwd.py b/xontrib/free_cwd.py\nindex 3fc435dbdb..f8f232d70a 100644\n--- a/xontrib/free_cwd.py\n+++ b/xontrib/free_cwd.py\n@@ -4,7 +4,7 @@\n Windows Explorer to delete or rename the current or parent\n directories. Internally, it is accomplished by temporarily resetting\n CWD to the root drive folder while waiting at the prompt. 
This only\n-works with the prompt_toolkit backend and can cause cause issues\n+works with the prompt_toolkit backend and can cause issues\n if any extensions are enabled that hook the prompt and relies on\n ``os.getcwd()``.\n \"\"\"\n@@ -12,7 +12,7 @@\n import os\n from pathlib import Path\n \n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XSH, XonshSession\n from xonsh.platform import ON_CYGWIN, ON_MSYS, ON_WINDOWS\n from xonsh.tools import print_exception\n \n@@ -92,7 +92,6 @@ def wrapper(*args, **kwargs):\n return wrapper\n \n \[email protected]_ptk_create\n def setup_release_cwd_hook(prompter, history, completer, bindings, **kw):\n if ON_WINDOWS and not ON_CYGWIN and not ON_MSYS:\n prompter.prompt = _cwd_release_wrapper(prompter.prompt)\n@@ -101,3 +100,7 @@ def setup_release_cwd_hook(prompter, history, completer, bindings, **kw):\n completer.completer.complete = _cwd_restore_wrapper(\n completer.completer.complete\n )\n+\n+\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.builtins.events.on_ptk_create(_cwd_restore_wrapper)\ndiff --git a/xontrib/pdb.py b/xontrib/pdb.py\nindex 24f6de12e7..793265ecdb 100644\n--- a/xontrib/pdb.py\n+++ b/xontrib/pdb.py\n@@ -1,7 +1,7 @@\n \"\"\"Simple built-in debugger. Runs pdb on reception of SIGUSR1 signal.\"\"\"\n import signal\n \n-__all__ = ()\n+from xonsh.built_ins import XonshSession\n \n \n def handle_sigusr1(sig, frame):\n@@ -11,4 +11,5 @@ def handle_sigusr1(sig, frame):\n pdb.Pdb().set_trace(frame)\n \n \n-signal.signal(signal.SIGUSR1, handle_sigusr1)\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ signal.signal(signal.SIGUSR1, handle_sigusr1)\ndiff --git a/xontrib/vox.py b/xontrib/vox.py\nindex 700566668a..4891deca4a 100644\n--- a/xontrib/vox.py\n+++ b/xontrib/vox.py\n@@ -7,7 +7,7 @@\n \n import xonsh.cli_utils as xcli\n import xontrib.voxapi as voxapi\n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XSH, XonshSession\n from xonsh.dirstack import pushd_fn\n from xonsh.platform import ON_WINDOWS\n from xonsh.tools import XonshError\n@@ -495,4 +495,5 @@ def upgrade(\n self.out(venv)\n \n \n-XSH.aliases[\"vox\"] = VoxHandler(threadable=False)\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.aliases[\"vox\"] = VoxHandler(threadable=False)\ndiff --git a/xontrib/whole_word_jumping.py b/xontrib/whole_word_jumping.py\nindex 07777bdc06..7f9538146b 100644\n--- a/xontrib/whole_word_jumping.py\n+++ b/xontrib/whole_word_jumping.py\n@@ -5,12 +5,9 @@\n \"\"\"\n from prompt_toolkit.keys import Keys\n \n-from xonsh.built_ins import XSH\n+from xonsh.built_ins import XonshSession\n \n-__all__ = ()\n \n-\[email protected]_ptk_create\n def custom_keybindings(bindings, **kw):\n \n # Key bindings for jumping over whole words (everything that's not\n@@ -42,3 +39,7 @@ def shift_delete(event):\n endpos = endpos + 1 if startpos == 0 else endpos\n buff.text = buff.text[:startpos] + buff.text[endpos:]\n buff.cursor_position = startpos\n+\n+\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.builtins.events.on_ptk_create(custom_keybindings)\ndiff --git a/xontrib/xog.py b/xontrib/xog.py\nindex 3e5602cff0..005c5bf25b 100644\n--- a/xontrib/xog.py\n+++ b/xontrib/xog.py\n@@ -6,9 +6,7 @@\n import pathlib\n import tempfile\n \n-from xonsh.built_ins import XSH\n-\n-__all__ = ()\n+from xonsh.built_ins import XSH, XonshSession\n \n \n def _get_log_file_name():\n@@ -65,5 +63,6 @@ def _xog(args, stdout=None, stderr=None):\n return 0 if rc else -1\n \n \n-XSH.env[\"XONSH_TRACEBACK_LOGFILE\"] = _get_log_file_name()\n-XSH.aliases[\"xog\"] = 
_xog\n+def _load_xontrib_(xsh: XonshSession, **_):\n+ xsh.env[\"XONSH_TRACEBACK_LOGFILE\"] = _get_log_file_name()\n+ xsh.aliases[\"xog\"] = _xog\n" }
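For readers skimming the diff above, here is a minimal sketch of what a xontrib module looks like under the new lifecycle hooks it introduces; the module name, alias name, and context key below are placeholders for illustration, not code taken from the patch:

```python
"""Sketch of a hypothetical xontrib/myext.py using the new lifecycle hooks."""
from xonsh.built_ins import XonshSession


def _hello(args):
    # Illustrative alias body.
    print("hello from myext")


def _load_xontrib_(xsh: XonshSession, **kwargs) -> dict:
    # Called when the xontrib is loaded (e.g. `xontrib load myext`).
    xsh.aliases["myext-hello"] = _hello
    # Whatever is returned here is merged into the session context.
    return {"myext_loaded": True}


def _unload_xontrib_(xsh: XonshSession, **kwargs) -> None:
    # Called by `xontrib unload myext` before the module is dropped from sys.modules.
    del xsh.aliases["myext-hello"]
    xsh.ctx.pop("myext_loaded", None)
```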
[ { "diff_hunk": "@@ -24,15 +24,50 @@ took inspiration from for xonsh:\n * `Sphinx <http://sphinx-doc.org/>`_: Extensions are just Python modules,\n bundles some extensions with the main package, interface is a list of\n string names.\n+* `IPython <https://ipython.readthedocs.io/en/stable/config/extensions/index.html>`_: Extensions are just Python modules\n+ with some special functions to load/unload.\n * `Oh My Zsh <http://ohmyz.sh/>`_: Centralized registry, autoloading, and\n for a shell.\n * `ESLint <http://eslint.org/>`_: Ability to use language package manager\n to install/remove extensions.\n \n-\n Structure\n-==========\n-Xontribs are modules written in either xonsh (``*.xsh``) or Python (``*.py``).\n+================\n+Xontribs are modules with some special functions written\n+in either xonsh (``*.xsh``) or Python (``*.py``) that has a couple of special functions to load and unload it.\n+\n+It is inspired from", "line": null, "original_line": 39, "original_start_line": null, "path": "docs/tutorial_xontrib.rst", "start_line": null, "text": "@user1:\n@author Did you accidentally delete the rest of this sentence?\n\n@author:\nI think that is the case, there should be ipython,sphinx ... in the mix\n\n@author:\nOk they are above, I should be removing the hanging sentence" } ]
6ddb1a85ecc9148a6b416b5da355300ca2a09ce1
diff --git a/docs/tutorial_xontrib.rst b/docs/tutorial_xontrib.rst index 88956e8e02..468e00326d 100644 --- a/docs/tutorial_xontrib.rst +++ b/docs/tutorial_xontrib.rst @@ -24,15 +24,48 @@ took inspiration from for xonsh: * `Sphinx <http://sphinx-doc.org/>`_: Extensions are just Python modules, bundles some extensions with the main package, interface is a list of string names. +* `IPython <https://ipython.readthedocs.io/en/stable/config/extensions/index.html>`_: Extensions are just Python modules + with some special functions to load/unload. * `Oh My Zsh <http://ohmyz.sh/>`_: Centralized registry, autoloading, and for a shell. * `ESLint <http://eslint.org/>`_: Ability to use language package manager to install/remove extensions. - Structure -========== -Xontribs are modules written in either xonsh (``*.xsh``) or Python (``*.py``). +================ +Xontribs are modules with some special functions written +in either xonsh (``*.xsh``) or Python (``*.py``). + +Here is a template: + +.. code-block:: python + from xonsh.built_ins import XonshSession + + def _load_xontrib_(xsh: XonshSession, **kwargs) -> dict: + """ + this function will be called when loading/reloading the xontrib. + + Args: + xsh: the current xonsh session instance, serves as the interface to manipulate the session. + This allows you to register new aliases, history backends, event listeners ... + **kwargs: it is empty as of now. Kept for future proofing. + Returns: + dict: this will get loaded into the current execution context + """ + + def _unload_xontrib_(xsh: XonshSession, **kwargs) -> dict: + """If you want your extension to be unloadable, put that logic here""" + +This _load_xontrib_() function is called after your extension is imported, +and the currently active :py:class:`xonsh.built_ins.XonshSession` instance is passed as the argument. + +.. note:: + + Xontribs without ``_load_xontrib_`` are still supported. + But when such xontrib is loaded, variables listed + in ``__all__`` are placed in the current + execution context if defined. + Normally, these are stored and found in an `implicit namespace package <https://www.python.org/dev/peps/pep-0420/>`_ called ``xontrib``. However, xontribs may be placed in any package or directory @@ -64,8 +97,7 @@ Here is a sample file system layout and what the xontrib names would be:: |- done.py # "mypkg.subpkg.done", full module name -You can also use `cookiecutter <https://github.com/audreyr/cookiecutter>`_ with -the `xontrib template <https://github.com/xonsh/xontrib-cookiecutter>`_ to easily +You can also use the `xontrib template <https://github.com/xonsh/xontrib-cookiecutter>`_ to easily create the layout for your xontrib package. @@ -73,36 +105,27 @@ Loading Xontribs ================ Xontribs may be loaded in a few different ways: from the config file (e.g. ``~/.config/xonsh/rc.xsh``), dynamically at runtime with -the ``xontrib`` command, or by importing the -module normally. Since these extensions are just Python modules, by -default, they cannot be unloaded (easily). - -.. note:: +the ``xontrib`` command, or its Python API. - When a xontrib is loaded its public variables are placed in the current - execution context unless ``__all__`` is defined, just like in regular Python - modules. - -Extensions are loaded via the ``xontrib`` command, which is a xonsh default -alias. This command may be run from anywhere in a xonshrc file or at any point -after xonsh has started up. Loading is the default action of the ``xontrib`` -command. 
Thus the following methods for loading via this command are equivalent: +Extensions are loaded via the ``xontrib load`` command. +This command may be run from anywhere in a xonshrc file or at any point +after xonsh has started up. .. code-block:: xonsh - xontrib myext mpl mypkg.show xontrib load myext mpl mypkg.show -Loading the same xontrib multiple times does not have any effect after the -first. Xontribs are simply Python modules, and therefore follow the same -caching rules. So by the same token, you can also import them normally. -Of course, you have to use the full module name to import a xontrib: +The same can be done in Python as well .. code-block:: python - import xontrib.mpl - from xontrib import myext - from mypkg.show import * + from xonsh.xontribs import xontribs_load + xontribs_load(['myext', 'mpl', 'mypkg.show']) + +A xontrib can be unloaded from the current session using ``xontrib unload`` + +.. code-block:: xonsh + xontrib unload myext mpl mypkg.show Listing Known Xontribs diff --git a/news/feat-xontrib-lifecycle.rst b/news/feat-xontrib-lifecycle.rst new file mode 100644 index 0000000000..703ddbdb96 --- /dev/null +++ b/news/feat-xontrib-lifecycle.rst @@ -0,0 +1,26 @@ +**Added:** + +* Now xontribs support `loading and unloading <https://github.com/xonsh/xonsh/issues/4541>`_ + with functions ``_load_xontrib_(xsh: XonshSession, **kwargs) -> dict``, + ``_unload_xontrib_(xsh: XonshSession, **kwargs) -> None`` defined in their module. + `Updated doc <https://xon.sh/tutorial_xontrib.html>`_ + +**Changed:** + +* <news item> + +**Deprecated:** + +* <news item> + +**Removed:** + +* <news item> + +**Fixed:** + +* <news item> + +**Security:** + +* <news item> diff --git a/tests/test_xontribs.py b/tests/test_xontribs.py index 1fdd1c2062..8291af82be 100644 --- a/tests/test_xontribs.py +++ b/tests/test_xontribs.py @@ -8,6 +8,8 @@ xontribs_load, xontribs_loaded, xontribs_main, + xontribs_reload, + xontribs_unload, ) @@ -96,6 +98,51 @@ def test_xontrib_load(tmpmod): assert "script" in xontribs_loaded() +def test_xontrib_unload(tmpmod, xession): + with tmpmod.mkdir("xontrib").join("script.py").open("w") as x: + x.write( + """ +hello = 'world' + +def _unload_xontrib_(xsh): del xsh.ctx['hello'] +""" + ) + + xontribs_load(["script"]) + assert "script" in xontribs_loaded() + assert "hello" in xession.ctx + xontribs_unload(["script"]) + assert "script" not in xontribs_loaded() + assert "hello" not in xession.ctx + + +def test_xontrib_reload(tmpmod, xession): + with tmpmod.mkdir("xontrib").join("script.py").open("w") as x: + x.write( + """ +hello = 'world' + +def _unload_xontrib_(xsh): del xsh.ctx['hello'] +""" + ) + + xontribs_load(["script"]) + assert "script" in xontribs_loaded() + assert xession.ctx["hello"] == "world" + + with tmpmod.join("xontrib").join("script.py").open("w") as x: + x.write( + """ +hello = 'world1' + +def _unload_xontrib_(xsh): del xsh.ctx['hello'] +""" + ) + xontribs_reload(["script"]) + assert "script" in xontribs_loaded() + assert xession.ctx["hello"] == "world1" + + def test_xontrib_load_dashed(tmpmod): """ Test that .xsh xontribs are loadable diff --git a/xonsh/events.py b/xonsh/events.py index 86c7287a78..7f6012c52b 100644 --- a/xonsh/events.py +++ b/xonsh/events.py @@ -269,6 +269,21 @@ class EventManager: Each event is just an attribute. They're created dynamically on first use. 
""" + def register(self, func): + """ + wraps ``EventManager.doc`` + + Parameters + ---------- + func + extract name and doc from the function + """ + + name = func.__name__ + doc = inspect.getdoc(func) + sign = inspect.signature(func) + return self.doc(name, f"{name}{sign}\n\n{doc}") + def doc(self, name, docstring): """ Applies a docstring to an event. diff --git a/xonsh/xontribs.py b/xonsh/xontribs.py index 3939400d66..b2034e2d9a 100644 --- a/xonsh/xontribs.py +++ b/xonsh/xontribs.py @@ -20,6 +20,10 @@ class ExitCode(IntEnum): INIT_FAILED = 2 +class XontribNotInstalled(Exception): + """raised when the requested xontrib is not found""" + + def find_xontrib(name): """Finds a xontribution from its name.""" spec = None @@ -36,12 +40,26 @@ def xontrib_context(name): spec = find_xontrib(name) if spec is None: return None - m = importlib.import_module(spec.name) - pubnames = getattr(m, "__all__", None) - if pubnames is not None: - ctx = {k: getattr(m, k) for k in pubnames} + module = importlib.import_module(spec.name) + ctx = {} + + def _get__all__(): + pubnames = getattr(module, "__all__", None) + if pubnames is None: + for k in dir(module): + if not k.startswith("_"): + yield k, getattr(module, k) + else: + for attr in pubnames: + yield attr, getattr(module, attr) + + entrypoint = getattr(module, "_load_xontrib_", None) + if entrypoint is None: + ctx.update(dict(_get__all__())) else: - ctx = {k: getattr(m, k) for k in dir(m) if not k.startswith("_")} + result = entrypoint(xsh=XSH) + if result is not None: + ctx.update(result) return ctx @@ -65,28 +83,30 @@ def prompt_xontrib_install(names: tp.List[str]): ) -def update_context(name, ctx=None): - """Updates a context in place from a xontrib. If ctx is not provided, - then __xonsh__.ctx is updated. - """ - if ctx is None: - ctx = XSH.ctx +def update_context(name, ctx: dict): + """Updates a context in place from a xontrib.""" modctx = xontrib_context(name) if modctx is None: - if not hasattr(update_context, "bad_imports"): - update_context.bad_imports = [] - update_context.bad_imports.append(name) - return ctx - return ctx.update(modctx) + raise XontribNotInstalled(f"Xontrib - {name} is not found.") + else: + ctx.update(modctx) + return ctx -def xontrib_names_completer(**_): - for name, meta in get_xontribs().items(): - full_name = f"xontrib.{name}" - if full_name not in sys.modules: +def _xontrib_name_completions(loaded=False): + for name, meta, spec in _get_xontrib_specs(): + if (spec.name in sys.modules) is loaded: yield RichCompletion(name, append_space=True, description=meta.description) +def xontrib_names_completer(**_): + yield from _xontrib_name_completions(loaded=False) + + +def xontrib_unload_completer(**_): + yield from _xontrib_name_completions(loaded=True) + + def xontribs_load( names: Annotated[ tp.Sequence[str], @@ -103,31 +123,90 @@ def xontribs_load( verbose : -v, --verbose verbose output """ - ctx = XSH.ctx + ctx = {} if XSH.ctx is None else XSH.ctx res = ExitCode.OK stdout = None stderr = None + bad_imports = [] for name in names: if verbose: print(f"loading xontrib {name!r}") try: update_context(name, ctx=ctx) + except XontribNotInstalled: + bad_imports.append(name) except Exception: res = ExitCode.INIT_FAILED print_exception(f"Failed to load xontrib {name}.") - if hasattr(update_context, "bad_imports"): + if bad_imports: res = ExitCode.NOT_FOUND - stderr = prompt_xontrib_install(update_context.bad_imports) # type: ignore - del update_context.bad_imports # type: ignore + stderr = prompt_xontrib_install(bad_imports) return 
stdout, stderr, res +def xontribs_unload( + names: Annotated[ + tp.Sequence[str], + Arg(nargs="+", completer=xontrib_unload_completer), + ] = (), + verbose=False, +): + """Unload the given xontribs + + Parameters + ---------- + names + name of xontribs to unload + + Notes + ----- + Proper cleanup can be implemented by the xontrib. The default is equivalent to ``del sys.modules[module]``. + """ + for name in names: + if verbose: + print(f"unloading xontrib {name!r}") + spec = find_xontrib(name) + try: + if spec and spec.name in sys.modules: + module = sys.modules[spec.name] + unloader = getattr(module, "_unload_xontrib_", None) + if unloader is not None: + unloader(XSH) + del sys.modules[spec.name] + except Exception as ex: + print_exception(f"Failed to unload xontrib {name} ({ex})") + + +def xontribs_reload( + names: Annotated[ + tp.Sequence[str], + Arg(nargs="+", completer=xontrib_unload_completer), + ] = (), + verbose=False, +): + """Reload the given xontribs + + Parameters + ---------- + names + name of xontribs to reload + """ + for name in names: + if verbose: + print(f"reloading xontrib {name!r}") + xontribs_unload([name]) + xontribs_load([name]) + + +def _get_xontrib_specs(): + for xo_name, meta in get_xontribs().items(): + yield xo_name, meta, find_xontrib(xo_name) + + def xontrib_data(): """Collects and returns the data about installed xontribs.""" - meta = get_xontribs() data = {} - for xo_name in meta: - spec = find_xontrib(xo_name) + for xo_name, _, spec in _get_xontrib_specs(): loaded = spec.name in sys.modules data[xo_name] = {"name": xo_name, "loaded": loaded} @@ -173,6 +252,8 @@ class XontribAlias(ArgParserAlias): def build(self): parser = self.create_parser(prog="xontrib") parser.add_command(xontribs_load, prog="load") + parser.add_command(xontribs_unload, prog="unload") + parser.add_command(xontribs_reload, prog="reload") parser.add_command(_list) return parser diff --git a/xontrib/abbrevs.py b/xontrib/abbrevs.py index c010c52dbe..bf7160bd6d 100644 --- a/xontrib/abbrevs.py +++ b/xontrib/abbrevs.py @@ -32,8 +32,7 @@ from prompt_toolkit.filters import IsMultiline, completion_is_selected from prompt_toolkit.keys import Keys -from xonsh.built_ins import XSH, DynamicAccessProxy -from xonsh.events import events +from xonsh.built_ins import DynamicAccessProxy, XonshSession from xonsh.tools import check_for_partial_string __all__ = () @@ -49,12 +48,6 @@ def __call__(self, word: str, buffer: Buffer) -> str: abbrevs: "dict[str, AbbrValType]" = dict() -# XSH.builtins is a namespace and extendable -XSH.builtins.abbrevs = abbrevs - -proxy = DynamicAccessProxy("abbrevs", "__xonsh__.builtins.abbrevs") -builtins.abbrevs = proxy # type: ignore - class _LastExpanded(tp.NamedTuple): word: str @@ -121,7 +114,6 @@ def set_cursor_position(buffer, expanded: str) -> None: buffer.delete(len(EDIT_SYMBOL)) [email protected]_ptk_create def custom_keybindings(bindings, **kw): from prompt_toolkit.filters import EmacsInsertMode, ViInsertMode @@ -156,3 +148,13 @@ def multiline_carriage_return(event): if not current_char or current_char.isspace(): abbrev.expand(buffer) carriage_return(buffer, event.cli) + + +def _load_xontrib_(xsh: XonshSession, **_): + xsh.builtins.events.on_ptk_create(custom_keybindings) + # XSH.builtins is a namespace and extendable + xsh.builtins.abbrevs = abbrevs + proxy = DynamicAccessProxy("abbrevs", "__xonsh__.builtins.abbrevs") + builtins.abbrevs = proxy # type: ignore + + return {"abbrevs": abbrevs} diff --git a/xontrib/autovox.py b/xontrib/autovox.py index 
40aed1e05e..52f73fa45c 100644 --- a/xontrib/autovox.py +++ b/xontrib/autovox.py @@ -13,26 +13,22 @@ from pathlib import Path import xontrib.voxapi as voxapi -from xonsh.built_ins import XSH +from xonsh.built_ins import XSH, XonshSession __all__ = () -XSH.builtins.events.doc( - "autovox_policy", +def autovox_policy(path: "Path") -> "str|Path|None": """ -autovox_policy(path: pathlib.Path) -> Union[str, pathlib.Path, None] + Register a policy with autovox. -Register a policy with autovox. + A policy is a function that takes a Path and returns the venv associated with it, + if any. -A policy is a function that takes a Path and returns the venv associated with it, -if any. - -NOTE: The policy should only return a venv for this path exactly, not for -parent paths. Parent walking is handled by autovox so that all policies can -be queried at each level. -""", -) + NOTE: The policy should only return a venv for this path exactly, not for + parent paths. Parent walking is handled by autovox so that all policies can + be queried at each level. + """ class MultipleVenvsWarning(RuntimeWarning): @@ -80,7 +76,8 @@ def check_for_new_venv(curdir, olddir): # Core mechanism: Check for venv when the current directory changes [email protected]_chdir + + def cd_handler(newdir, olddir, **_): check_for_new_venv(Path(newdir), Path(olddir)) @@ -88,12 +85,10 @@ def cd_handler(newdir, olddir, **_): # Recalculate when venvs are created or destroyed [email protected]_on_create def create_handler(**_): check_for_new_venv(Path.cwd(), ...) [email protected]_on_destroy def destroy_handler(**_): check_for_new_venv(Path.cwd(), ...) @@ -101,6 +96,13 @@ def destroy_handler(**_): # Initial activation before first prompt [email protected]_post_init def load_handler(**_): check_for_new_venv(Path.cwd(), None) + + +def _load_xontrib_(xsh: XonshSession, **_): + xsh.builtins.events.register(autovox_policy) + xsh.builtins.events.on_chdir(cd_handler) + xsh.builtins.events.vox_on_create(create_handler) + xsh.builtins.events.vox_on_destroy(destroy_handler) + xsh.builtins.events.on_post_init(load_handler) diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py index 01cfdd04b2..41fc263094 100644 --- a/xontrib/bashisms.py +++ b/xontrib/bashisms.py @@ -15,7 +15,7 @@ import shlex import sys -from xonsh.built_ins import XSH +from xonsh.built_ins import XSH, XonshSession __all__ = () @@ -28,7 +28,6 @@ def _warn_not_supported(msg: str): ) [email protected]_transform_command def bash_preproc(cmd, **kw): bang_previous = { "!": lambda x: x, @@ -82,10 +81,6 @@ def alias(args, stdin=None): return ret -XSH.aliases["alias"] = alias -XSH.env["THREAD_SUBPROCS"] = False - - def _unset(args): if not args: print("Usage: unset ENV_VARIABLE", file=sys.stderr) @@ -97,9 +92,6 @@ def _unset(args): print(f"{v} not found", file=sys.stderr) -XSH.aliases["unset"] = _unset - - def _export(args): if not args: print("Usage: export ENV_VARIABLE=VALUE", file=sys.stderr) @@ -112,9 +104,6 @@ def _export(args): print(f"{eq} equal sign not found", file=sys.stderr) -XSH.aliases["export"] = _export - - def _set(args): arg = args[0] if arg == "-e": @@ -129,9 +118,6 @@ def _set(args): _warn_not_supported(f"set {arg}") -XSH.aliases["set"] = _set - - def _shopt(args): supported_shopt = ["DOTGLOB"] @@ -157,7 +143,12 @@ def _shopt(args): _warn_not_supported(f"shopt {args}") -XSH.aliases["shopt"] = _shopt - - -XSH.aliases["complete"] = "completer list" +def _load_xontrib_(xsh: XonshSession, **_): + xsh.builtins.events.on_transform_command(bash_preproc) + xsh.aliases.register(_unset) 
+ xsh.aliases.register(_export) + xsh.aliases.register(_shopt) + xsh.aliases.register(_set) + xsh.aliases["complete"] = "completer list".split() + xsh.aliases["alias"] = alias + xsh.env["THREAD_SUBPROCS"] = False diff --git a/xontrib/coreutils.py b/xontrib/coreutils.py index bf2eb79c9b..7384b0e345 100644 --- a/xontrib/coreutils.py +++ b/xontrib/coreutils.py @@ -14,7 +14,7 @@ tools avoid the need for a full subprocess call. Additionally, these tools are cross-platform. """ -from xonsh.built_ins import XSH +from xonsh.built_ins import XonshSession from xonsh.platform import ON_POSIX from xonsh.xoreutils.cat import cat from xonsh.xoreutils.echo import echo @@ -26,20 +26,18 @@ from xonsh.xoreutils.uptime import uptime from xonsh.xoreutils.yes import yes -__all__ = () -XSH.aliases["cat"] = cat -XSH.aliases["echo"] = echo -XSH.aliases["pwd"] = pwd -XSH.aliases["tee"] = tee -XSH.aliases["tty"] = tty -XSH.aliases["uname"] = uname -XSH.aliases["uptime"] = uptime -XSH.aliases["yes"] = yes -XSH.aliases["umask"] = umask -XSH.aliases["uptime"] = uptime +def _load_xontrib_(xsh: XonshSession, **_): + xsh.aliases["cat"] = cat + xsh.aliases["echo"] = echo + xsh.aliases["pwd"] = pwd + xsh.aliases["tee"] = tee + xsh.aliases["tty"] = tty + xsh.aliases["uname"] = uname + xsh.aliases["uptime"] = uptime + xsh.aliases["umask"] = umask + xsh.aliases["yes"] = yes + if ON_POSIX: + from xonsh.xoreutils.ulimit import ulimit -if ON_POSIX: - from xonsh.xoreutils.ulimit import ulimit - - XSH.aliases["ulimit"] = ulimit + xsh.aliases["ulimit"] = ulimit diff --git a/xontrib/fish_completer.py b/xontrib/fish_completer.py index 4b212e7879..7782b3de5a 100644 --- a/xontrib/fish_completer.py +++ b/xontrib/fish_completer.py @@ -26,4 +26,5 @@ def fish_proc_completer(ctx: CommandContext): ) -completer.add_one_completer("fish", fish_proc_completer, "<bash") +def _load_xontrib_(**_): + completer.add_one_completer("fish", fish_proc_completer, "<bash") diff --git a/xontrib/free_cwd.py b/xontrib/free_cwd.py index 3fc435dbdb..f8f232d70a 100644 --- a/xontrib/free_cwd.py +++ b/xontrib/free_cwd.py @@ -4,7 +4,7 @@ Windows Explorer to delete or rename the current or parent directories. Internally, it is accomplished by temporarily resetting CWD to the root drive folder while waiting at the prompt. This only -works with the prompt_toolkit backend and can cause cause issues +works with the prompt_toolkit backend and can cause issues if any extensions are enabled that hook the prompt and relies on ``os.getcwd()``. """ @@ -12,7 +12,7 @@ import os from pathlib import Path -from xonsh.built_ins import XSH +from xonsh.built_ins import XSH, XonshSession from xonsh.platform import ON_CYGWIN, ON_MSYS, ON_WINDOWS from xonsh.tools import print_exception @@ -92,7 +92,6 @@ def wrapper(*args, **kwargs): return wrapper [email protected]_ptk_create def setup_release_cwd_hook(prompter, history, completer, bindings, **kw): if ON_WINDOWS and not ON_CYGWIN and not ON_MSYS: prompter.prompt = _cwd_release_wrapper(prompter.prompt) @@ -101,3 +100,7 @@ def setup_release_cwd_hook(prompter, history, completer, bindings, **kw): completer.completer.complete = _cwd_restore_wrapper( completer.completer.complete ) + + +def _load_xontrib_(xsh: XonshSession, **_): + xsh.builtins.events.on_ptk_create(_cwd_restore_wrapper) diff --git a/xontrib/pdb.py b/xontrib/pdb.py index 24f6de12e7..793265ecdb 100644 --- a/xontrib/pdb.py +++ b/xontrib/pdb.py @@ -1,7 +1,7 @@ """Simple built-in debugger. 
Runs pdb on reception of SIGUSR1 signal.""" import signal -__all__ = () +from xonsh.built_ins import XonshSession def handle_sigusr1(sig, frame): @@ -11,4 +11,5 @@ def handle_sigusr1(sig, frame): pdb.Pdb().set_trace(frame) -signal.signal(signal.SIGUSR1, handle_sigusr1) +def _load_xontrib_(xsh: XonshSession, **_): + signal.signal(signal.SIGUSR1, handle_sigusr1) diff --git a/xontrib/vox.py b/xontrib/vox.py index 700566668a..4891deca4a 100644 --- a/xontrib/vox.py +++ b/xontrib/vox.py @@ -7,7 +7,7 @@ import xonsh.cli_utils as xcli import xontrib.voxapi as voxapi -from xonsh.built_ins import XSH +from xonsh.built_ins import XSH, XonshSession from xonsh.dirstack import pushd_fn from xonsh.platform import ON_WINDOWS from xonsh.tools import XonshError @@ -495,4 +495,5 @@ def upgrade( self.out(venv) -XSH.aliases["vox"] = VoxHandler(threadable=False) +def _load_xontrib_(xsh: XonshSession, **_): + xsh.aliases["vox"] = VoxHandler(threadable=False) diff --git a/xontrib/whole_word_jumping.py b/xontrib/whole_word_jumping.py index 07777bdc06..7f9538146b 100644 --- a/xontrib/whole_word_jumping.py +++ b/xontrib/whole_word_jumping.py @@ -5,12 +5,9 @@ """ from prompt_toolkit.keys import Keys -from xonsh.built_ins import XSH +from xonsh.built_ins import XonshSession -__all__ = () - [email protected]_ptk_create def custom_keybindings(bindings, **kw): # Key bindings for jumping over whole words (everything that's not @@ -42,3 +39,7 @@ def shift_delete(event): endpos = endpos + 1 if startpos == 0 else endpos buff.text = buff.text[:startpos] + buff.text[endpos:] buff.cursor_position = startpos + + +def _load_xontrib_(xsh: XonshSession, **_): + xsh.builtins.events.on_ptk_create(custom_keybindings) diff --git a/xontrib/xog.py b/xontrib/xog.py index 3e5602cff0..005c5bf25b 100644 --- a/xontrib/xog.py +++ b/xontrib/xog.py @@ -6,9 +6,7 @@ import pathlib import tempfile -from xonsh.built_ins import XSH - -__all__ = () +from xonsh.built_ins import XSH, XonshSession def _get_log_file_name(): @@ -65,5 +63,6 @@ def _xog(args, stdout=None, stderr=None): return 0 if rc else -1 -XSH.env["XONSH_TRACEBACK_LOGFILE"] = _get_log_file_name() -XSH.aliases["xog"] = _xog +def _load_xontrib_(xsh: XonshSession, **_): + xsh.env["XONSH_TRACEBACK_LOGFILE"] = _get_log_file_name() + xsh.aliases["xog"] = _xog
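As a usage-level sketch of the functions the merged patch adds to `xonsh/xontribs.py` (assuming a running xonsh session; the xontrib name `myext` is a placeholder):

```python
from xonsh.xontribs import xontribs_load, xontribs_loaded, xontribs_reload, xontribs_unload

xontribs_load(["myext"], verbose=True)   # import the module and run _load_xontrib_
assert "myext" in xontribs_loaded()

xontribs_reload(["myext"])               # unload, then load again to pick up edits
xontribs_unload(["myext"])               # run _unload_xontrib_ and drop it from sys.modules
assert "myext" not in xontribs_loaded()
```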
{ "difficulty": "medium", "estimated_review_effort": 4, "problem_domain": "New Feature Additions" }
sympy__sympy-22551@626064e
sympy/sympy
Python
22,551
core/containers: no longer create Tuple's with lists as arguments.
#### References to other Issues or PRs

Fixes #22550

#### Brief description of what is fixed or changed

Several classes in SymPy use `sympify` on their arguments. In some cases, this leads to the creation of `Tuple` objects that have `list`s as their arguments. This PR solves those cases as far as they show up in the tests, by explicitly looping over iterables when a mix of `tuple` and `list` can be used for the input arguments.

#### Other comments

#### Release Notes

<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
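As a rough illustration of the caller-side pattern the description refers to (looping over the incoming iterable so that every element handed to `Tuple` is itself a `Tuple` or `Basic`, never a `list`); the variable names below are made up and this is a sketch, not code taken from the PR:

```python
from sympy import Tuple

# Data may arrive as a mix of lists and tuples, e.g. rows of a table.
data = [[1, 2], (3, 4)]

# Instead of Tuple(*data) (which would keep the inner lists as-is),
# loop over the iterable and convert each element explicitly.
rows = Tuple(*[Tuple(*row) for row in data])

print(rows)                # ((1, 2), (3, 4))
print(type(rows.args[0]))  # <class 'sympy.core.containers.Tuple'>
```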
2021-11-26T01:10:18Z
Tuple using sympify instead of _sympify incorrectly supports lists

Currently, `Tuple` supports the keyword `sympify` (`True` by default). However, this uses `sympify` rather than `_sympify`. As a result, `list`s appear to be supported as arguments to `Tuple`, but this may result in unexpected behaviour, since the resulting `Tuple` keeps the plain `list` as an argument and is therefore in principle mutable. Instead, I suggest using `_sympify` over `sympify`, checking for any `list` argument and converting those using `Tuple(*list_arg)`.
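To see why the mutability matters, a quick sketch of the behaviour being described, run against a version from before this PR (the exact printed forms are indicative only):

```python
from sympy import Tuple

t = Tuple([1, 2])        # accepted because Tuple sympifies with sympify(), not _sympify()
print(type(t.args[0]))   # <class 'list'>: the stored argument is still a plain Python list
t.args[0].append(3)      # ...so the "immutable" Tuple can be mutated in place
print(t)                 # ([1, 2, 3],)

# The suggested conversion instead gives proper Basic arguments:
print(Tuple(*[1, 2]).args)  # (1, 2)
```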
Alternatively, since this is used very often, perhaps it is better to define a `TestBasic` class for the tests, which takes care of this in the class constructor.

It is generally preferred to use `_sympify` rather than `sympify` but changing that everywhere would not be fully compatible. Is there a particular problem with Tuple and list arguments?

The issue is that it keeps them as lists, which are not `Basic`s

```python
>>> from sympy import *
>>> type(Tuple([1]).args[0])
list
```

The proposed solution would give `Tuple([1]).args[0] == Tuple(S.One)`

My question is really: is that a behaviour that is depended upon somewhere and if so where?

The errors produced seem unrelated. I'm not sure if I made a mistake setting up the PR, or if the `TypeError` I am raising gets caught in try/except blocks

> The proposed solution would give `Tuple([1]).args[0]`

Maybe that's ok since we don't want a mutable as an argument so this is the only thing we can legitimately do with a list argument. But what if there is a nested list as in `Tuple([1, 2, [3, 4]])`? Or `Tuple([1,2],3)` -- raise an error? Note:

```python
>>> t=Tuple((1,2),3)
>>> type(t.args[0]) is Tuple
True
```

If any argument would be a `list`, it would be converted to `Tuple(*list)`, so that way it should be handled recursively, although I might be missing some case. I'll make sure to add tests such that `list` and `tuple` arguments are handled equivalently.

The linked PR now correctly shows where the issues are.

@oscarbenjamin I guess part of the problem is the fact that `sympify` (as opposed to `_sympify`) also handles lists so is not guaranteed to return Basic, but many places that should use `_sympify` use `sympify` instead.

The problem with `Tuple` specifically is not that it isn't guaranteed to have `Basic`s, but there are actually examples that are run during the tests that have this happening.
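A rough sketch of the conversion rule being discussed above (any `list` argument is turned into `Tuple(*list)`, which also covers nested lists because the recursion re-enters the same conversion); this is illustrative helper code for the idea, not the actual `Tuple.__new__` change from the PR:

```python
from sympy import Tuple, sympify

def as_tuple(*args):
    # Hypothetical stand-in for the proposed argument handling:
    # lists are expanded recursively, everything else is sympified.
    converted = []
    for arg in args:
        if isinstance(arg, list):
            converted.append(as_tuple(*arg))  # nested lists handled by recursion
        else:
            converted.append(sympify(arg))
    return Tuple(*converted)

print(as_tuple([1, 2, [3, 4]]))  # ((1, 2, (3, 4)),)
print(as_tuple([1, 2], 3))       # ((1, 2), 3)
```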
<details> <summary> errors found when not allowing `list` in `Tuple` </summary> ``` ________________________________________________________________________________ ____________ sympy/core/tests/test_diff.py:test_diff_nth_derivative ____________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/core/tests/test_diff.py", line 152, in test_diff_nth_derivative assert (cos(x)*sin(y)).diff([[x, y, z]]) == NDimArray([ File "/home/runner/work/sympy/sympy/sympy/core/expr.py", line 3544, in diff return _derivative_dispatch(self, *symbols, **assumptions) File "/home/runner/work/sympy/sympy/sympy/core/function.py", line 1920, in _derivative_dispatch return ArrayDerivative(expr, *variables, **kwargs) File "/home/runner/work/sympy/sympy/sympy/tensor/array/array_derivatives.py", line 19, in __new__ obj = super().__new__(cls, expr, *variables, **kwargs) File "/home/runner/work/sympy/sympy/sympy/core/function.py", line 1273, in __new__ variables = list(sympify(variables)) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [[x, y, z]] ________________________________________________________________________________ ___________ sympy/core/tests/test_function.py:test_Derivative__new__ ___________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/core/tests/test_function.py", line 1334, in test_Derivative__new__ assert f(x, y).diff([(x, y), 0]) == f(x, y) File "/home/runner/work/sympy/sympy/sympy/core/expr.py", line 3544, in diff return _derivative_dispatch(self, *symbols, **assumptions) File "/home/runner/work/sympy/sympy/sympy/core/function.py", line 1920, in _derivative_dispatch return ArrayDerivative(expr, *variables, **kwargs) File "/home/runner/work/sympy/sympy/sympy/tensor/array/array_derivatives.py", line 19, in __new__ obj = super().__new__(cls, expr, *variables, **kwargs) File "/home/runner/work/sympy/sympy/sympy/core/function.py", line 1273, in __new__ variables = list(sympify(variables)) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [(x, y), 0] ________________________________________________________________________________ __________ sympy/matrices/tests/test_matrices.py:test_diff_by_matrix ___________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/matrices/tests/test_matrices.py", line 1977, in test_diff_by_matrix dB = B.diff([[a, b]]) File "/home/runner/work/sympy/sympy/sympy/matrices/matrices.py", line 470, in diff deriv = ArrayDerivative(self, *args, evaluate=True) File "/home/runner/work/sympy/sympy/sympy/tensor/array/array_derivatives.py", line 19, in __new__ obj = super().__new__(cls, expr, *variables, **kwargs) File "/home/runner/work/sympy/sympy/sympy/core/function.py", line 1273, in __new__ variables = list(sympify(variables)) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 
366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [[a, b]] ________________________________________________________________________________ ______________ sympy/printing/tests/test_julia.py:test_containers ______________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_julia.py", line 215, in test_containers assert julia_code((1, eye(3), Matrix(0, 0, []), [])) == "(1, [1 0 0;\n0 1 0;\n0 0 1], zeros(0, 0), Any[])" File "/home/runner/work/sympy/sympy/sympy/printing/julia.py", line 626, in julia_code return JuliaCodePrinter(settings).doprint(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 137, in doprint expr = _handle_assign_to(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 122, in _handle_assign_to return sympify(expr) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [] ________________________________________________________________________________ ______________ sympy/printing/tests/test_maple.py:test_containers ______________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_maple.py", line 237, in test_containers assert maple_code((1, eye(3), Matrix(0, 0, []), [])) == \ File "/home/runner/work/sympy/sympy/sympy/printing/maple.py", line 295, in maple_code return MapleCodePrinter(settings).doprint(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 137, in doprint expr = _handle_assign_to(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 122, in _handle_assign_to return sympify(expr) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [] ________________________________________________________________________________ _____________ sympy/printing/tests/test_octave.py:test_containers ______________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_octave.py", line 284, in test_containers assert mcode((1, eye(3), Matrix(0, 0, []), [])) == "{1, [1 0 0; 0 1 0; 0 0 1], [], {}}" File "/home/runner/work/sympy/sympy/sympy/printing/octave.py", line 709, in octave_code return OctaveCodePrinter(settings).doprint(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 137, in doprint expr = _handle_assign_to(expr, assign_to) File "/home/runner/work/sympy/sympy/sympy/printing/codeprinter.py", line 122, in _handle_assign_to return sympify(expr) File 
"/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [] ________________________________________________________________________________ ____________ sympy/printing/tests/test_tableform.py:test_TableForm _____________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_tableform.py", line 13, in test_TableForm s = str(TableForm([["a", "b"], ["c", "d"], ["e", 0]], File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [a, b] ________________________________________________________________________________ _________ sympy/printing/tests/test_tableform.py:test_TableForm_latex __________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/printing/tests/test_tableform.py", line 104, in test_TableForm_latex s = latex(TableForm([[0, x**3], ["c", S.One/4], [sqrt(x), sin(x**2)]], File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [0, x**3] ________________________________________________________________________________ ________________ sympy/simplify/tests/test_cse.py:test_cse_list ________________ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/simplify/tests/test_cse.py", line 588, in test_cse_list assert _cse(c(it)) == ([], c(it)) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [x] ________________________________________________________________________________ ____ sympy/stats/tests/test_stochastic_process.py:test_DiscreteMarkovChain _____ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/stats/tests/test_stochastic_process.py", line 140, in test_DiscreteMarkovChain assert Y3.fundamental_matrix() == ImmutableMatrix([[176, 81, -132], [36, 141, -52], [-44, -39, 208]])/125 File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1110, in fundamental_matrix _, _, _, Q = self.decompose() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1351, in decompose classes = self.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File 
"/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [0, 1, 2] ________________________________________________________________________________ sympy/tensor/array/tests/test_array_comprehension.py:test_array_comprehension _ Traceback (most recent call last): File "/home/runner/work/sympy/sympy/sympy/tensor/array/tests/test_array_comprehension.py", line 44, in test_array_comprehension raises(TypeError, lambda: ArrayComprehension(i*j, (i, 1, 3), (j, 2, [1, 3, 2]))) File "/home/runner/work/sympy/sympy/sympy/testing/pytest.py", line 110, in raises code() File "/home/runner/work/sympy/sympy/sympy/tensor/array/tests/test_array_comprehension.py", line 44, in <lambda> raises(TypeError, lambda: ArrayComprehension(i*j, (i, 1, 3), (j, 2, [1, 3, 2]))) File "/home/runner/work/sympy/sympy/sympy/tensor/array/array_comprehension.py", line 41, in __new__ arglist.extend(cls._check_limits_validity(function, symbols)) File "/home/runner/work/sympy/sympy/sympy/tensor/array/array_comprehension.py", line 228, in _check_limits_validity limits = sympify(limits) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 53, in __new__ args = tuple((sympify(arg) for arg in args)) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 53, in <genexpr> args = tuple((sympify(arg) for arg in args)) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [1, 3, 2] ``` ``` ________________________________________________________________________________ _________________ sympy.printing.tableform.TableForm._sympystr _________________ File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 250, in sympy.printing.tableform.TableForm._sympystr Failed example: t = TableForm([[5, 7], [4, 2], [10, 3]]) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm._sympystr[1]>", line 1, in <module> t = TableForm([[5, 7], [4, 2], [10, 3]]) File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [5, 7] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 251, in sympy.printing.tableform.TableForm._sympystr Failed example: s = t.as_str() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in 
__run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm._sympystr[2]>", line 1, in <module> s = t.as_str() NameError: name 't' is not defined ________________________________________________________________________________ _________________ sympy.printing.tableform.TableForm.as_matrix _________________ File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 218, in sympy.printing.tableform.TableForm.as_matrix Failed example: t = TableForm([[5, 7], [4, 2], [10, 3]], headings='automatic') Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.as_matrix[1]>", line 1, in <module> t = TableForm([[5, 7], [4, 2], [10, 3]], headings='automatic') File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [5, 7] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 219, in sympy.printing.tableform.TableForm.as_matrix Failed example: t Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.as_matrix[2]>", line 1, in <module> t NameError: name 't' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 225, in sympy.printing.tableform.TableForm.as_matrix Failed example: t.as_matrix() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.as_matrix[3]>", line 1, in <module> t.as_matrix() NameError: name 't' is not defined ________________________________________________________________________________ _________________ sympy.printing.tableform.TableForm.__init__ __________________ File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 101, in sympy.printing.tableform.TableForm.__init__ Failed example: TableForm([[5, 7], [4, 2], [10, 3]]) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.__init__[1]>", line 1, in <module> TableForm([[5, 7], [4, 2], [10, 3]]) File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [5, 7] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 105, in sympy.printing.tableform.TableForm.__init__ Failed example: TableForm([list('.'*i) for i in range(1, 4)], headings='automatic') Exception raised: Traceback (most 
recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.__init__[2]>", line 1, in <module> TableForm([list('.'*i) for i in range(1, 4)], headings='automatic') File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [., , ] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 111, in sympy.printing.tableform.TableForm.__init__ Failed example: TableForm([[Symbol('.'*(j if not i%2 else 1)) for i in range(3)] for j in range(4)], alignments='rcl') Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm.__init__[3]>", line 1, in <module> TableForm([[Symbol('.'*(j if not i%2 else 1)) for i in range(3)] File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [, ., ] ________________________________________________________________________________ ______________________ sympy.printing.tableform.TableForm ______________________ File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 17, in sympy.printing.tableform.TableForm Failed example: t = TableForm([[5, 7], [4, 2], [10, 3]]) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm[1]>", line 1, in <module> t = TableForm([[5, 7], [4, 2], [10, 3]]) File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [5, 7] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 18, in sympy.printing.tableform.TableForm Failed example: print(t) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm[2]>", line 1, in <module> print(t) NameError: name 't' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 26, in sympy.printing.tableform.TableForm Failed example: print(t.as_latex()) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.printing.tableform.TableForm[3]>", line 1, in <module> print(t.as_latex()) 
NameError: name 't' is not defined ________________________________________________________________________________ ________________________ sympy.utilities.misc.rawlines _________________________ File "/home/runner/work/sympy/sympy/sympy/utilities/misc.py", line 104, in sympy.utilities.misc.rawlines Failed example: s = str(TableForm([[1, 10]], headings=(None, ['a', 'bee']))) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.utilities.misc.rawlines[2]>", line 1, in <module> s = str(TableForm([[1, 10]], headings=(None, ['a', 'bee']))) File "/home/runner/work/sympy/sympy/sympy/printing/tableform.py", line 147, in __init__ _lines = Tuple(*data) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [1, 10] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/utilities/misc.py", line 105, in sympy.utilities.misc.rawlines Failed example: print(rawlines(s)) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.utilities.misc.rawlines[3]>", line 1, in <module> print(rawlines(s)) NameError: name 's' is not defined ________________________________________________________________________________ ___ sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form ____ File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1406, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states, new_matrix = X.canonical_form() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[4]>", line 1, in <module> states, new_matrix = X.canonical_form() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1468, in canonical_form states, A, B, C = self.decompose() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1351, in decompose classes = self.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [3] ********************************************************************** File 
"/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1407, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[5]>", line 1, in <module> states NameError: name 'states' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1410, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: new_matrix Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[6]>", line 1, in <module> new_matrix NameError: name 'new_matrix' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1423, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: X = DiscreteMarkovChain('X', states, new_matrix) Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[7]>", line 1, in <module> X = DiscreteMarkovChain('X', states, new_matrix) NameError: name 'states' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1424, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states, new_matrix = X.canonical_form() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[8]>", line 1, in <module> states, new_matrix = X.canonical_form() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1468, in canonical_form states, A, B, C = self.decompose() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1351, in decompose classes = self.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ 
warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [3] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1425, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[9]>", line 1, in <module> states NameError: name 'states' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1428, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: new_matrix Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[10]>", line 1, in <module> new_matrix NameError: name 'new_matrix' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1444, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states, new_matrix = X.canonical_form() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[13]>", line 1, in <module> states, new_matrix = X.canonical_form() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1468, in canonical_form states, A, B, C = self.decompose() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1351, in decompose classes = self.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [1, 3] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1445, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: states Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run 
exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[14]>", line 1, in <module> states NameError: name 'states' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1448, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form Failed example: new_matrix Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form[15]>", line 1, in <module> new_matrix NameError: name 'new_matrix' is not defined ________________________________________________________________________________ ______ sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose ______ File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1310, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose Failed example: states, A, B, C = X.decompose() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose[4]>", line 1, in <module> states, A, B, C = X.decompose() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1351, in decompose classes = self.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [2] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1311, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose Failed example: states Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose[5]>", line 1, in <module> states NameError: name 'states' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1314, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose Failed example: A # recurrent to recurrent Exception raised: Traceback (most recent call last): File 
"/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose[6]>", line 1, in <module> A # recurrent to recurrent NameError: name 'A' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1317, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose Failed example: B # transient to recurrent Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose[7]>", line 1, in <module> B # transient to recurrent NameError: name 'B' is not defined ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1324, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose Failed example: C # transient to transient Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose[8]>", line 1, in <module> C # transient to transient NameError: name 'C' is not defined ________________________________________________________________________________ sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 995, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes Failed example: classes = X.communication_classes() Exception raised: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes[4]>", line 1, in <module> classes = X.communication_classes() File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 1096, in communication_classes return sympify(list(zip(classes, recurrence, periods))) File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in sympify return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 442, in <listcomp> return type(a)([sympify(x, locals=locals, convert_xor=convert_xor, File "/home/runner/work/sympy/sympy/sympy/core/sympify.py", line 366, in sympify return conv(a) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 152, in <lambda> converter[tuple] = lambda tup: Tuple(*tup) File "/home/runner/work/sympy/sympy/sympy/core/containers.py", line 57, in __new__ warnings.warn( DeprecationWarning(msg)) DeprecationWarning: Tuple should not contain list as arguments: [1, 2] ********************************************************************** File "/home/runner/work/sympy/sympy/sympy/stats/stochastic_process_types.py", line 996, in sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes Failed example: for states, is_recurrent, period in classes: states, is_recurrent, period Exception raised: Traceback 
(most recent call last): File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run exec(compile(example.source, filename, "single", File "<doctest sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes[5]>", line 1, in <module> for states, is_recurrent, period in classes: NameError: name 'classes' is not defined ``` </summary>
[ { "body": "Currently, `Tuple` supports the keyword `sympify` (`True` by default). However, this uses `sympify` rather than `_sympify`. As a result, `lists` appear to be supported as arguments to `Tuple` but this may result in unexpected behaviour since the resulting `Tuple` is now in principle mutable. Instead, I suggest to use `_sympify` over `sympify` and check for any `list` argument and convert those using `Tuple(*list_arg)`. ", "number": 22550, "title": "Tuple using sympify instead of _sympify incorrectly supports lists" } ]
11eb28349d8a97e1088e9daa1ad12b9046005185
{ "head_commit": "626064e1a582d6dc37fc3bcf54e3508d38fe6877", "head_commit_message": "removed misuses of sympify that would create Tuples with non-Basic argument", "patch_to_review": "diff --git a/sympy/core/containers.py b/sympy/core/containers.py\nindex 2c6a7721b09c..f3f8a89662ca 100644\n--- a/sympy/core/containers.py\n+++ b/sympy/core/containers.py\n@@ -14,7 +14,7 @@\n from .sympify import _sympify, sympify, converter, SympifyError\n from sympy.utilities.iterables import iterable\n from sympy.utilities.misc import as_int\n-\n+import warnings\n \n class Tuple(Basic):\n \"\"\"\n@@ -50,6 +50,12 @@ class Tuple(Basic):\n def __new__(cls, *args, **kwargs):\n if kwargs.get('sympify', True):\n args = (sympify(arg) for arg in args)\n+ #if kwargs.get('denest_lists', False):\n+ # args = (Tuple(*arg) if isinstance(arg, list) else arg for arg in args)\n+ args = tuple(args)\n+ for arg in args:\n+ if isinstance(arg, list):\n+ warnings.warn(DeprecationWarning('Tuple',arg))\n obj = Basic.__new__(cls, *args)\n return obj\n \ndiff --git a/sympy/core/function.py b/sympy/core/function.py\nindex bb4ae3a6e06a..01dc55eb4944 100644\n--- a/sympy/core/function.py\n+++ b/sympy/core/function.py\n@@ -1270,7 +1270,7 @@ def __new__(cls, expr, *variables, **kwargs):\n must be supplied to differentiate %s''' % expr))\n \n # Standardize the variables by sympifying them:\n- variables = list(sympify(variables))\n+ #variables = sympify(list(variables))\n \n # Split the list of variables into a list of the variables we are diff\n # wrt, where each element of the list has the form (s, count) where\n@@ -1278,11 +1278,12 @@ def __new__(cls, expr, *variables, **kwargs):\n # derivative.\n variable_count = []\n array_likes = (tuple, list, Tuple)\n+ integer_likes = (int, Integer)\n \n from sympy.tensor.array import Array, NDimArray\n \n for i, v in enumerate(variables):\n- if isinstance(v, Integer):\n+ if isinstance(v, integer_likes):\n if i == 0:\n raise ValueError(\"First variable cannot be a number: %i\" % v)\n count = v\ndiff --git a/sympy/core/tests/test_containers.py b/sympy/core/tests/test_containers.py\nindex c598af7eda0f..a8070251e32e 100644\n--- a/sympy/core/tests/test_containers.py\n+++ b/sympy/core/tests/test_containers.py\n@@ -42,6 +42,10 @@ def test_Tuple():\n assert Tuple.fromiter(x for x in range(4)) == Tuple(0, 1, 2, 3)\n assert st2.fromiter(st2.args) == st2\n \n+ #see issue 22550\n+ assert Tuple([1, 2, [3, 4]]) == Tuple(Tuple(1, 2, Tuple(3, 4)))\n+ assert Tuple([1, 2], 3) == Tuple(Tuple(1, 2), 3)\n+\n \n def test_Tuple_contains():\n t1, t2 = Tuple(1), Tuple(2)\ndiff --git a/sympy/printing/tableform.py b/sympy/printing/tableform.py\nindex c102ffe7f556..8d39285a4758 100644\n--- a/sympy/printing/tableform.py\n+++ b/sympy/printing/tableform.py\n@@ -144,7 +144,7 @@ def __init__(self, data, **kwarg):\n lj = Symbol(str(lj))\n line[j] = lj\n data[i] = line\n- _lines = Tuple(*data)\n+ _lines = Tuple(*[Tuple(*d) for d in data])\n \n headings = kwarg.get(\"headings\", [None, None])\n if headings == \"automatic\":\ndiff --git a/sympy/printing/tests/test_julia.py b/sympy/printing/tests/test_julia.py\nindex e0e13f9257d7..c9a39151fc16 100644\n--- a/sympy/printing/tests/test_julia.py\n+++ b/sympy/printing/tests/test_julia.py\n@@ -212,7 +212,8 @@ def test_containers():\n assert julia_code(Tuple(*[1, 2, 3])) == \"(1, 2, 3)\"\n assert julia_code((1, x*y, (3, x**2))) == \"(1, x.*y, (3, x.^2))\"\n # scalar, matrix, empty matrix and empty list\n- assert julia_code((1, eye(3), Matrix(0, 0, []), [])) == \"(1, [1 0 0;\\n0 1 0;\\n0 0 1], 
zeros(0, 0), Any[])\"\n+ from sympy.codegen.pynodes import List\n+ assert julia_code((1, eye(3), Matrix(0, 0, []), List())) == \"(1, [1 0 0;\\n0 1 0;\\n0 0 1], zeros(0, 0), Any[])\"\n \n \n def test_julia_noninline():\ndiff --git a/sympy/printing/tests/test_maple.py b/sympy/printing/tests/test_maple.py\nindex 337a6320930d..9435cab91e7b 100644\n--- a/sympy/printing/tests/test_maple.py\n+++ b/sympy/printing/tests/test_maple.py\n@@ -233,8 +233,8 @@ def test_containers():\n assert maple_code(Tuple(*[1, 2, 3])) == \"[1, 2, 3]\"\n assert maple_code((1, x * y, (3, x ** 2))) == \"[1, x*y, [3, x^2]]\"\n # scalar, matrix, empty matrix and empty list\n-\n- assert maple_code((1, eye(3), Matrix(0, 0, []), [])) == \\\n+ from sympy.codegen.pynodes import List\n+ assert maple_code((1, eye(3), Matrix(0, 0, []), List(*[]))) == \\\n \"[1, Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]], storage = rectangular), Matrix([], storage = rectangular), []]\"\n \n \ndiff --git a/sympy/printing/tests/test_octave.py b/sympy/printing/tests/test_octave.py\nindex e0762b7580db..d1969ec9abf1 100644\n--- a/sympy/printing/tests/test_octave.py\n+++ b/sympy/printing/tests/test_octave.py\n@@ -281,7 +281,8 @@ def test_containers():\n assert mcode(Tuple(*[1, 2, 3])) == \"{1, 2, 3}\"\n assert mcode((1, x*y, (3, x**2))) == \"{1, x.*y, {3, x.^2}}\"\n # scalar, matrix, empty matrix and empty list\n- assert mcode((1, eye(3), Matrix(0, 0, []), [])) == \"{1, [1 0 0; 0 1 0; 0 0 1], [], {}}\"\n+ from sympy.codegen.pynodes import List\n+ assert mcode((1, eye(3), Matrix(0, 0, []), List(*[]))) == \"{1, [1 0 0; 0 1 0; 0 0 1], [], {}}\"\n \n \n def test_octave_noninline():\ndiff --git a/sympy/simplify/tests/test_cse.py b/sympy/simplify/tests/test_cse.py\nindex efb39bc2d3ea..eb9cc231316b 100644\n--- a/sympy/simplify/tests/test_cse.py\n+++ b/sympy/simplify/tests/test_cse.py\n@@ -584,8 +584,10 @@ def test_cse_list():\n assert _cse(x) == ([], x)\n assert _cse('x') == ([], 'x')\n it = [x]\n- for c in (list, tuple, set, Tuple):\n+ for c in (list, tuple, set):\n assert _cse(c(it)) == ([], c(it))\n+ #Tuple works different from tuple:\n+ assert _cse(Tuple(*it)) == ([], Tuple(*it))\n d = {x: 1}\n assert _cse(d) == ([], d)\n \ndiff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py\nindex ab5f934d0d61..5ea0bd658437 100644\n--- a/sympy/stats/stochastic_process_types.py\n+++ b/sympy/stats/stochastic_process_types.py\n@@ -995,8 +995,8 @@ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]\n >>> classes = X.communication_classes()\n >>> for states, is_recurrent, period in classes:\n ... 
states, is_recurrent, period\n- ([1, 2], True, 2)\n- ([3], False, 1)\n+ ((1, 2), True, 2)\n+ ((3,), False, 1)\n \n From this we can see that states ``1`` and ``2``\n communicate, are recurrent and have a period\n@@ -1091,9 +1091,8 @@ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]\n # end breadth-first search\n \n # convert back to the user's state names\n- classes = [[self._state_index[i] for i in class_] for class_ in classes]\n-\n- return sympify(list(zip(classes, recurrence, periods)))\n+ classes = [[_sympify(self._state_index[i]) for i in class_] for class_ in classes]\n+ return list(zip(classes, recurrence, map(Integer,periods)))\n \n def fundamental_matrix(self):\n \"\"\"\ndiff --git a/sympy/tensor/array/array_comprehension.py b/sympy/tensor/array/array_comprehension.py\nindex 9498ea7f9f07..cd960c7c37ea 100644\n--- a/sympy/tensor/array/array_comprehension.py\n+++ b/sympy/tensor/array/array_comprehension.py\n@@ -1,7 +1,7 @@\n import functools, itertools\n-from sympy.core.sympify import sympify\n+from sympy.core.sympify import _sympify, sympify\n from sympy.core.expr import Expr\n-from sympy.core import Basic\n+from sympy.core import Basic, Tuple\n from sympy.tensor.array import ImmutableDenseNDimArray\n from sympy.core.symbol import Symbol\n from sympy.core.numbers import Integer\n@@ -225,8 +225,18 @@ def __len__(self):\n \n @classmethod\n def _check_limits_validity(cls, function, limits):\n- limits = sympify(limits)\n+ #limits = sympify(limits)\n+ new_limits = []\n for var, inf, sup in limits:\n+ var = _sympify(var)\n+ inf = _sympify(inf)\n+ #since this is stored as an argument, it should be\n+ #a Tuple\n+ if isinstance(sup, list):\n+ sup = Tuple(*sup)\n+ else:\n+ sup = _sympify(sup) \n+ new_limits.append(Tuple(var, inf, sup))\n if any((not isinstance(i, Expr)) or i.atoms(Symbol, Integer) != i.atoms()\n for i in [inf, sup]):\n raise TypeError('Bounds should be an Expression(combination of Integer and Symbol)')\n@@ -234,7 +244,7 @@ def _check_limits_validity(cls, function, limits):\n raise ValueError('Lower bound should be inferior to upper bound')\n if var in inf.free_symbols or var in sup.free_symbols:\n raise ValueError('Variable should not be part of its bounds')\n- return limits\n+ return new_limits\n \n @classmethod\n def _calculate_shape_from_limits(cls, limits):\n" }
[ { "diff_hunk": "@@ -1270,19 +1270,20 @@ def __new__(cls, expr, *variables, **kwargs):\n must be supplied to differentiate %s''' % expr))\n \n # Standardize the variables by sympifying them:\n- variables = list(sympify(variables))\n+ #variables = sympify(list(variables))", "line": null, "original_line": 1273, "original_start_line": null, "path": "sympy/core/function.py", "start_line": null, "text": "@author:\nThis is actually not needed here since all variables are eventually stored in `Tuple` (or `Array` which again stores it in `Tuple`) where `sympify` is also used.\n\n@user1:\nMaybe this should just be deleted." } ]
41ba53c4197cc7b5ffec89bf0161555f00b103b4
diff --git a/sympy/codegen/abstract_nodes.py b/sympy/codegen/abstract_nodes.py new file mode 100644 index 000000000000..eb5da05fc0fe --- /dev/null +++ b/sympy/codegen/abstract_nodes.py @@ -0,0 +1,15 @@ +"""This module provides containers for python objects that are valid +printing targets but are not a subclass of SymPy's Printable. +""" + + +from sympy.core.containers import Tuple + + +class List(Tuple): + """Represents a (frozen) (Python) list (for code printing purposes).""" + def __eq__(self, other): + if isinstance(other, list): + return self == List(*other) + else: + return self.args == other diff --git a/sympy/codegen/pynodes.py b/sympy/codegen/pynodes.py index 2fe93c6e252d..407fbde3aec5 100644 --- a/sympy/codegen/pynodes.py +++ b/sympy/codegen/pynodes.py @@ -1,10 +1,4 @@ -from sympy.core import Tuple +from .abstract_nodes import List as AbstractList - -class List(Tuple): - """Represents a (frozen) (Python) list (for code printing purposes).""" - def __eq__(self, other): - if isinstance(other, list): - return self == List(*other) - else: - return self.args == other +class List(AbstractList): + pass diff --git a/sympy/core/function.py b/sympy/core/function.py index bb4ae3a6e06a..1282fd5547a7 100644 --- a/sympy/core/function.py +++ b/sympy/core/function.py @@ -1269,20 +1269,18 @@ def __new__(cls, expr, *variables, **kwargs): expression, the variable(s) of differentiation must be supplied to differentiate %s''' % expr)) - # Standardize the variables by sympifying them: - variables = list(sympify(variables)) - # Split the list of variables into a list of the variables we are diff # wrt, where each element of the list has the form (s, count) where # s is the entity to diff wrt and count is the order of the # derivative. variable_count = [] array_likes = (tuple, list, Tuple) + integer_likes = (int, Integer) from sympy.tensor.array import Array, NDimArray for i, v in enumerate(variables): - if isinstance(v, Integer): + if isinstance(v, integer_likes): if i == 0: raise ValueError("First variable cannot be a number: %i" % v) count = v diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py index 29a727d55df2..60cfabd584ae 100644 --- a/sympy/core/tests/test_args.py +++ b/sympy/core/tests/test_args.py @@ -491,6 +491,9 @@ def test_sympy__codegen__scipy_nodes__cosm1(): from sympy.codegen.scipy_nodes import cosm1 assert _test_args(cosm1(x)) +def test_sympy__codegen__abstract_nodes__List(): + from sympy.codegen.abstract_nodes import List + assert _test_args(List(1, 2, 3)) @XFAIL def test_sympy__combinatorics__graycode__GrayCode(): diff --git a/sympy/printing/codeprinter.py b/sympy/printing/codeprinter.py index 8c2f803ca377..98f861562137 100644 --- a/sympy/printing/codeprinter.py +++ b/sympy/printing/codeprinter.py @@ -36,8 +36,10 @@ class AssignmentError(Exception): def _convert_python_lists(arg): if isinstance(arg, list): - from sympy.codegen.pynodes import List + from sympy.codegen.abstract_nodes import List return List(*(_convert_python_lists(e) for e in arg)) + elif isinstance(arg, tuple): + return tuple(_convert_python_lists(e) for e in arg) else: return arg @@ -134,8 +136,9 @@ def _handle_assign_to(expr, assign_to): type(self).__name__, type(assign_to))) return Assignment(assign_to, expr) - expr = _handle_assign_to(expr, assign_to) expr = _convert_python_lists(expr) + expr = _handle_assign_to(expr, assign_to) + # Remove re(...) 
nodes due to UnevaluatedExpr.is_real always is None: expr = self._handle_UnevaluatedExpr(expr) diff --git a/sympy/printing/tableform.py b/sympy/printing/tableform.py index c102ffe7f556..8d39285a4758 100644 --- a/sympy/printing/tableform.py +++ b/sympy/printing/tableform.py @@ -144,7 +144,7 @@ def __init__(self, data, **kwarg): lj = Symbol(str(lj)) line[j] = lj data[i] = line - _lines = Tuple(*data) + _lines = Tuple(*[Tuple(*d) for d in data]) headings = kwarg.get("headings", [None, None]) if headings == "automatic": diff --git a/sympy/simplify/tests/test_cse.py b/sympy/simplify/tests/test_cse.py index efb39bc2d3ea..eb9cc231316b 100644 --- a/sympy/simplify/tests/test_cse.py +++ b/sympy/simplify/tests/test_cse.py @@ -584,8 +584,10 @@ def test_cse_list(): assert _cse(x) == ([], x) assert _cse('x') == ([], 'x') it = [x] - for c in (list, tuple, set, Tuple): + for c in (list, tuple, set): assert _cse(c(it)) == ([], c(it)) + #Tuple works different from tuple: + assert _cse(Tuple(*it)) == ([], Tuple(*it)) d = {x: 1} assert _cse(d) == ([], d) diff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py index ab5f934d0d61..a43003aaa062 100644 --- a/sympy/stats/stochastic_process_types.py +++ b/sympy/stats/stochastic_process_types.py @@ -1091,9 +1091,8 @@ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]] # end breadth-first search # convert back to the user's state names - classes = [[self._state_index[i] for i in class_] for class_ in classes] - - return sympify(list(zip(classes, recurrence, periods))) + classes = [[_sympify(self._state_index[i]) for i in class_] for class_ in classes] + return list(zip(classes, recurrence, map(Integer,periods))) def fundamental_matrix(self): """ diff --git a/sympy/tensor/array/array_comprehension.py b/sympy/tensor/array/array_comprehension.py index 9498ea7f9f07..329696bef21e 100644 --- a/sympy/tensor/array/array_comprehension.py +++ b/sympy/tensor/array/array_comprehension.py @@ -1,7 +1,7 @@ import functools, itertools -from sympy.core.sympify import sympify +from sympy.core.sympify import _sympify, sympify from sympy.core.expr import Expr -from sympy.core import Basic +from sympy.core import Basic, Tuple from sympy.tensor.array import ImmutableDenseNDimArray from sympy.core.symbol import Symbol from sympy.core.numbers import Integer @@ -225,8 +225,18 @@ def __len__(self): @classmethod def _check_limits_validity(cls, function, limits): - limits = sympify(limits) + #limits = sympify(limits) + new_limits = [] for var, inf, sup in limits: + var = _sympify(var) + inf = _sympify(inf) + #since this is stored as an argument, it should be + #a Tuple + if isinstance(sup, list): + sup = Tuple(*sup) + else: + sup = _sympify(sup) + new_limits.append(Tuple(var, inf, sup)) if any((not isinstance(i, Expr)) or i.atoms(Symbol, Integer) != i.atoms() for i in [inf, sup]): raise TypeError('Bounds should be an Expression(combination of Integer and Symbol)') @@ -234,7 +244,7 @@ def _check_limits_validity(cls, function, limits): raise ValueError('Lower bound should be inferior to upper bound') if var in inf.free_symbols or var in sup.free_symbols: raise ValueError('Variable should not be part of its bounds') - return limits + return new_limits @classmethod def _calculate_shape_from_limits(cls, limits):
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Code Refactoring / Architectural Improvement" }
sympy__sympy-22483@42077c2
sympy/sympy
Python
22483
recursive canonical
This is for Eq and Ne which can have relational args. <!-- Your title above should be a short description of what was changed. Do not include the issue number in the title. --> #### References to other Issues or PRs <!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact format, e.g. "Fixes #1234" (see https://tinyurl.com/auto-closing for more information). Also, please write a comment on that issue linking back to this pull request once it is open. --> closes #8243 #### Brief description of what is fixed or changed #### Other comments #### Release Notes <!-- Write the release notes for this release below between the BEGIN and END statements. The basic format is a bulleted list with the name of the subpackage and the release note for this PR. For example: * solvers * Added a new solver for logarithmic equations. * functions * Fixed a bug with log of integers. or if no release note(s) should be included use: NO ENTRY See https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information on how to write release notes. The bot will check your release notes automatically to see if they are formatted correctly. --> <!-- BEGIN RELEASE NOTES --> * core * canonical will recurse into Relational args in Eq and Ne so `Eq(y<x,x>y).canonical -> True` <!-- END RELEASE NOTES -->
2021-11-13T03:02:27Z
relationals created with symbols can be made canonical ``` python >>> a=1<x >>> b=S(1)<x >>> c=x>S(1) >>> a==b False >>> b==c False ``` The `a==b` failure is expected since Python flips the args around. But it and the `b==c` case could all be the same if Relationals created from symbols (<, >, etc...) would sort args canonically. This, like sorting arguments of an evaluated Add (so that `x + 2` == `2 + x`), would make working with inequalities a little friendlier.
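For reference, the comparison the issue wants can already be recovered manually through the public `.canonical` property on a single relational, as in this minimal sketch (standard SymPy API only):

```python
# Sketch using only the public Relational.canonical property.
from sympy import S, Symbol

x = Symbol("x")
b = S(1) < x      # StrictLessThan(1, x)
c = x > S(1)      # StrictGreaterThan(x, 1)

print(b == c)                          # False: different classes / arg order
print(b.canonical == c.canonical)      # True: both normalize to x > 1
```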
Actually, there is the gts and lts (in addition to the lhs and rhs attributes): ``` python >>> a = S(1) >>> b = a<x >>> c=x>a >>> b.lts == c.lts True >>> b.gts == c.gts True ``` Some work toward this end is in #8423 I don't think that auto-canonicalization is favored. It is possible to simplify this expression: ```python >>> Eq((S.One<x),(x>1)) Eq(1 < x, x > 1) >>> _.canonical Eq(x > 1, 1 < x) <--- canonical probably isn't recursive but could be >>> simplify(_) True ```
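With the recursive-canonical change carried by this record's patch, the `Eq` case from the comment above collapses directly, without calling `simplify`; a short sketch (this requires a SymPy build that includes the change):

```python
# Mirrors the doctest added by the merged patch; needs the patched SymPy.
from sympy import Eq, Symbol

x, y = Symbol("x"), Symbol("y")
print(Eq(x < y, y > x).canonical)   # True
```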
[ { "body": "``` python\n>>> a=1<x\n>>> b=S(1)<x\n>>> c=x>S(1)\n>>> a==b\nFalse\n>>> b==c\nFalse\n```\n\nThe `a==b` failure is expected since Python flips the args around. But it and the `b==c` case could all be the same if Relationals created from symbols (<, >, etc...) would sort args canonically. This, like sorting arguments of an evaluated Add (so that `x + 2` == `2 + x`), would make working with inequalities a little friendlier.\n", "number": 8243, "title": "relationals created with symbols can be made canonical" } ]
76d72acaeeab0082fc43eee42ddc8688acd3e306
{ "head_commit": "42077c2a7e16e947aeffab4d1ccb42d5cb2ff0c4", "head_commit_message": "Update test_relational.py", "patch_to_review": "diff --git a/sympy/core/relational.py b/sympy/core/relational.py\nindex 3d95a9102bad..2b30b5d0d9a4 100644\n--- a/sympy/core/relational.py\n+++ b/sympy/core/relational.py\n@@ -111,7 +111,8 @@ def __new__(cls, lhs, rhs, rop=None, **assumptions):\n # other than Eq/Ne;\n # Note: Symbol is a subclass of Boolean but is considered\n # acceptable here.\n- if any(map(_nontrivBool, (lhs, rhs))):\n+ if any(map(_nontrivBool, (lhs, rhs))) or isinstance(lhs, Relational\n+ ) or isinstance(rhs, Relational)):\n raise TypeError(filldedent('''\n A Boolean argument can only be used in\n Eq and Ne; all other relationals expect\n@@ -266,7 +267,7 @@ def canonical(self):\n >>> (-y < -x).canonical\n x < y\n \"\"\"\n- args = self.args\n+ args = [i.canonical if isinstance(i, Relational) else i for i in self.args]\n r = self\n if r.rhs.is_number:\n if r.rhs.is_Number and r.lhs.is_Number and r.lhs > r.rhs:\ndiff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py\nindex eeb5f9afec4d..e98f5f5df754 100644\n--- a/sympy/core/tests/test_relational.py\n+++ b/sympy/core/tests/test_relational.py\n@@ -854,6 +854,7 @@ def test_canonical():\n assert [i.canonical for i in c] == c\n assert [i.reversed.canonical for i in c] == c\n assert not any(i.lhs.is_Number and not i.rhs.is_Number for i in c)\n+ assert Eq(y < x, x > y).canonical is S.true\n \n \n @XFAIL\n" }
[ { "diff_hunk": "@@ -111,7 +111,8 @@ def __new__(cls, lhs, rhs, rop=None, **assumptions):\n # other than Eq/Ne;\n # Note: Symbol is a subclass of Boolean but is considered\n # acceptable here.\n- if any(map(_nontrivBool, (lhs, rhs))):\n+ if any(map(_nontrivBool, (lhs, rhs))) or isinstance(lhs, Relational\n+ ) or isinstance(rhs, Relational)):", "line": null, "original_line": 115, "original_start_line": 114, "path": "sympy/core/relational.py", "start_line": null, "text": "@author:\n```suggestion\r\n if any(map(_nontrivBool, (lhs, rhs))):\r\n```" } ]
64a74e6c9c577a882aa958f717f6b64f4e6a3a24
diff --git a/sympy/core/relational.py b/sympy/core/relational.py index 3d95a9102bad..7a4be766fdb1 100644 --- a/sympy/core/relational.py +++ b/sympy/core/relational.py @@ -265,9 +265,20 @@ def canonical(self): x < -y >>> (-y < -x).canonical x < y + + The canonicalization is recursively applied: + + >>> from sympy import Eq + >>> Eq(x < y, y > x).canonical + True """ - args = self.args - r = self + args = tuple([i.canonical if isinstance(i, Relational) else i for i in self.args]) + if args != self.args: + r = self.func(*args) + if not isinstance(r, Relational): + return r + else: + r = self if r.rhs.is_number: if r.rhs.is_Number and r.lhs.is_Number and r.lhs > r.rhs: r = r.reversed diff --git a/sympy/core/tests/test_relational.py b/sympy/core/tests/test_relational.py index eeb5f9afec4d..e98f5f5df754 100644 --- a/sympy/core/tests/test_relational.py +++ b/sympy/core/tests/test_relational.py @@ -854,6 +854,7 @@ def test_canonical(): assert [i.canonical for i in c] == c assert [i.reversed.canonical for i in c] == c assert not any(i.lhs.is_Number and not i.rhs.is_Number for i in c) + assert Eq(y < x, x > y).canonical is S.true @XFAIL
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Code Refactoring / Architectural Improvement" }
xonsh__xonsh-4461@3b2e507
xonsh/xonsh
Python
4461
Fix: EnvPath.add() does not maintain uniqueness with Path objects on replace
<!--- Thanks for opening a PR on xonsh! Please include a news entry with your PR to help keep our changelog up to date! There are instructions available here: https://xon.sh/devguide.html#changelog --> <!--- If there is specific issue / feature request that this PR is addressing, please link to the corresponding issue by using the `#issuenumber` syntax. Thanks again! --> Fixes #4366 Tested via docker python. ![image](https://user-images.githubusercontent.com/1596188/132269422-df894b81-1c1d-4f80-9db7-512abd253eee.png) Original credits for the filter out duplicates to source: https://stackoverflow.com/a/25251306/1621381 Seems to work great when testing. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
2021-09-07T01:05:21Z
EnvPath.add() doesn't maintain uniqueness with Path objects ## xonfig <details> ``` +------------------+----------------------+ | xonsh | 0.9.27 | | Git SHA | 71fe9014 | | Commit Date | Jan 29 08:58:58 2021 | | Python | 3.9.5 | | PLY | 3.11 | | have readline | True | | prompt toolkit | None | | shell type | readline | | pygments | 2.9.0 | | on posix | True | | on linux | True | | distro | ubuntu | | on darwin | False | | on windows | False | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | on jupyter | False | | jupyter kernel | None | | xontrib 1 | apt_tabcomplete | | xontrib 2 | direnv | | xontrib 3 | kitty | | xontrib 4 | linuxbrew | +------------------+----------------------+ ``` </details> ## Expected Behavior `EnvPath.add(thing)` (e.g. `$PATH.add(thing)`) should not allow multiple copies of `thing` to enter the list, compared as strings. ## Current Behavior (and reproduction steps) `EnvPath.add("str")` and `EnvPath.add(p"str")` do not detect each other's entries: ``` egnor@ostrich ~ $ $PATH EnvPath( ['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/games', '/usr/local/games', '/snap/bin'] ) egnor@ostrich ~ $ $PATH.add("/home/egnor/bin") egnor@ostrich ~ $ $PATH.add(p"~/bin") egnor@ostrich ~ $ $PATH EnvPath( ['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/games', '/usr/local/games', '/snap/bin', '/home/egnor/bin', '/home/egnor/bin'] ) ``` This is because `EnvPath.add(data)` will put its data (of any type, `Path` or `str`) into the object directly, and `Path("foo") != "foo"`. My first instinct was to have `EnvPath.add(data)` include `data = str(data)`, but then I realized that `EnvPath.__init__` was explicitly written to allow a mixed bag of `Path` and `str` objects, so I wasn't sure what was desired. `EnvPath` is documented as being a "list of strings" so I would expect it to stringify everything on entry, but maybe there's some concerns with that? Note that `$PATH.add(p"~/bin")` seems like a very logical and idiomatic thing to put in one's `.xonshrc`, but due to this bug will end up with repeated copies when subshells occur. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
Hi! > EnvPath.add(thing) (e.g. $PATH.add(thing)) should not allow multiple copies of thing to enter the list, compared as strings. Why? AFAIK the OS doesn't mind and other shells allow this as well. The `.add` method is specifically documented not to add duplicates (with flags controlling precise de-duping semantics), and in general when all the items are "str" it does not, and that seems to be that method's whole reason to exist, vs `.append` or whatnot. The general use case is adding to `$PATH` or similar strings in one's `.xonshrc` without ending up with an ever expanding path full of dupes due to subshells that launch each other for various reasons. This all works just fine but the `Path`/`str` mixture seems to break that. > Oh sorry I missed you used add instead of append On Sat, Jul 17, 2021, 18:59 Daniel Egnor ***@***.***> wrote: > The `.add` method is specifically documented not to add duplicates (with > flags controlling precise de-duping semantics), and in general when all the > items are "str" it does not, and that seems to be that method's whole > reason to exist, vs `.append` or whatnot. > > The general use case is adding to `$PATH` or similar strings in one's > `.xonshrc` without ending up with an ever expanding path full of dupes due > to subshells that launch each other for various reasons. > > This all works just fine but the `Path`/`str` mixture seems to break that. > > > > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/xonsh/xonsh/issues/4366#issuecomment-881919089>, or > unsubscribe > <https://github.com/notifications/unsubscribe-auth/AELF3BIYPKBQ2DBQEPISGQDTYGSEVANCNFSM5AQJJPRA> > . > Can confirm this is still a thing too. Also even with add used and replace it never removes all copies before inserting the one in front. So you will still have consistent issues. ![image](https://user-images.githubusercontent.com/1596188/132262503-ff91e3c7-aba0-4c73-baa9-964362bec31a.png) ![image](https://user-images.githubusercontent.com/1596188/132262525-70067811-40b9-4a71-9471-69e643824159.png) Still think the suggestion from: https://github.com/xonsh/xonsh/issues/2562#issuecomment-357891119 Was sleekest way to clean this up after ![image](https://user-images.githubusercontent.com/1596188/132262586-98c3a08b-1bdd-43f4-8aca-017ac4644995.png)
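A standalone sketch of the `add(front=..., replace=...)` behaviour that the new tests in this record's patch pin down: every string-equal copy is removed before the value is re-inserted. The `add` helper here is illustrative only, not the real `EnvPath.add`:

```python
# Hedged sketch of the semantics exercised by the record's new tests.
def add(entries, value, front=False, replace=False):
    svalue = str(value)
    if replace:
        # Drop every existing copy (string-compared) before re-inserting.
        entries = [e for e in entries if str(e) != svalue]
    elif any(str(e) == svalue for e in entries):
        return list(entries)
    return [value] + entries if front else entries + [value]

print(add(["/home/wakka", "/home/wakka/bin", "/home/wakka/bin"],
          "/home/wakka/bin", front=True, replace=True))
# ['/home/wakka/bin', '/home/wakka']
```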
[ { "body": "## xonfig\r\n\r\n<details>\r\n\r\n```\r\n+------------------+----------------------+\r\n| xonsh | 0.9.27 |\r\n| Git SHA | 71fe9014 |\r\n| Commit Date | Jan 29 08:58:58 2021 |\r\n| Python | 3.9.5 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | None |\r\n| shell type | readline |\r\n| pygments | 2.9.0 |\r\n| on posix | True |\r\n| on linux | True |\r\n| distro | ubuntu |\r\n| on darwin | False |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| on jupyter | False |\r\n| jupyter kernel | None |\r\n| xontrib 1 | apt_tabcomplete |\r\n| xontrib 2 | direnv |\r\n| xontrib 3 | kitty |\r\n| xontrib 4 | linuxbrew |\r\n+------------------+----------------------+\r\n```\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\n`EnvPath.add(thing)` (e.g. `$PATH.add(thing)`) should not allow multiple copies of `thing` to enter the list, compared as strings.\r\n\r\n## Current Behavior (and reproduction steps)\r\n`EnvPath.add(\"str\")` and `EnvPath.add(p\"str\")` do not detect each other's entries:\r\n\r\n```\r\negnor@ostrich ~ $ $PATH\r\nEnvPath(\r\n['/usr/local/sbin',\r\n '/usr/local/bin',\r\n '/usr/sbin',\r\n '/usr/bin',\r\n '/sbin',\r\n '/bin',\r\n '/usr/games',\r\n '/usr/local/games',\r\n '/snap/bin']\r\n)\r\negnor@ostrich ~ $ $PATH.add(\"/home/egnor/bin\")\r\negnor@ostrich ~ $ $PATH.add(p\"~/bin\")\r\negnor@ostrich ~ $ $PATH\r\nEnvPath(\r\n['/usr/local/sbin',\r\n '/usr/local/bin',\r\n '/usr/sbin',\r\n '/usr/bin',\r\n '/sbin',\r\n '/bin',\r\n '/usr/games',\r\n '/usr/local/games',\r\n '/snap/bin',\r\n '/home/egnor/bin',\r\n '/home/egnor/bin']\r\n)\r\n```\r\n\r\nThis is because `EnvPath.add(data)` will put its data (of any type, `Path` or `str`) into the object directly, and `Path(\"foo\") != \"foo\"`.\r\n\r\nMy first instinct was to have `EnvPath.add(data)` include `data = str(data)`, but then I realized that `EnvPath.__init__` was explicitly written to allow a mixed bag of `Path` and `str` objects, so I wasn't sure what was desired. `EnvPath` is documented as being a \"list of strings\" so I would expect it to stringify everything on entry, but maybe there's some concerns with that?\r\n\r\nNote that `$PATH.add(p\"~/bin\")` seems like a very logical and idiomatic thing to put in one's `.xonshrc`, but due to this bug will end up with repeated copies when subshells occur.\r\n\r\n## For community\r\n⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**\r\n", "number": 4366, "title": "EnvPath.add() doesn't maintain uniqueness with Path objects" } ]
6df8a24fc92ce6af8adbd471f276171c96966b6d
{ "head_commit": "3b2e507fedc11ca113a046dbeb7f8b64ec2ade24", "head_commit_message": "Add tests for EnvPath.add() function", "patch_to_review": "diff --git a/news/fixduplicatepaths.rst b/news/fixduplicatepaths.rst\nnew file mode 100644\nindex 0000000000..8beed2482a\n--- /dev/null\n+++ b/news/fixduplicatepaths.rst\n@@ -0,0 +1,15 @@\n+**Added:** None\n+\n+**Changed:**\n+\n+* <news item>\n+\n+**Deprecated:** None\n+\n+**Removed:** None\n+\n+**Fixed:**\n+\n+* Fix Duplicate paths left over when add paths to Path via xonsh.tools.EnvPath\n+\n+**Security:** None\ndiff --git a/tests/test_tools.py b/tests/test_tools.py\nindex 6dd1b34049..9784aa9077 100644\n--- a/tests/test_tools.py\n+++ b/tests/test_tools.py\n@@ -928,9 +928,9 @@ def test_env_path_to_str(inp, exp):\n \"left, right, exp\",\n [\n (\n- EnvPath([\"/home/wakka\"]),\n- [\"/home/jawaka\"],\n- EnvPath([\"/home/wakka\", \"/home/jawaka\"]),\n+ EnvPath([\"/home/wakka\"]),\n+ [\"/home/jawaka\"],\n+ EnvPath([\"/home/wakka\", \"/home/jawaka\"]),\n ),\n ([\"a\"], EnvPath([\"b\"]), EnvPath([\"a\", \"b\"])),\n (EnvPath([\"c\"]), EnvPath([\"d\"]), EnvPath([\"c\", \"d\"])),\n@@ -942,6 +942,30 @@ def test_env_path_add(left, right, exp):\n assert exp == obs\n \n \n+def test_env_path_add_replace_no_dupes_front_replace_existing():\n+ # Test replaces without dupes when added to front when adding existing entry\n+ path = EnvPath([\"/home/wakka\", \"/home/wakka/bin\"])\n+ path.add(\"/home/wakka/bin\", front=True, replace=True)\n+ exp = [\"/home/wakka/bin\", \"/home/wakka\"]\n+ assert exp == path\n+\n+\n+def test_env_path_add_replace_no_dupes_front_replace_multiple():\n+ # Test replaces without dupes when added to front when multiple existing occurrences\n+ path = EnvPath([\"/home/wakka\", \"/home/wakka/bin\", \"/home/wakka/bin\"])\n+ path.add(\"/home/wakka/bin\", front=True, replace=True)\n+ exp = [\"/home/wakka/bin\", \"/home/wakka\"]\n+ assert exp == path\n+\n+\n+def test_env_path_add_replace_no_dupes_back_replace_multiple():\n+ # Test replaces without dupes when not added to front\n+ path = EnvPath([\"/home/wakka\", \"/home/wakka/bin\", \"/home/wakka/bin\"])\n+ path.add(\"/home/wakka/bin\", front=False, replace=True)\n+ exp = [\"/home/wakka\", \"/home/wakka/bin\"]\n+ assert exp == path\n+\n+\n # helper\n def expand(path):\n return os.path.expanduser(os.path.expandvars(path))\n@@ -972,12 +996,12 @@ def test_env_path_getitem(inp, exp, xession, env):\n \"inp, exp\",\n [\n (\n- os.pathsep.join([\"xonsh_dir\", \"../\", \".\", \"~/\"]),\n- [\"xonsh_dir\", \"../\", \".\", \"~/\"],\n+ os.pathsep.join([\"xonsh_dir\", \"../\", \".\", \"~/\"]),\n+ [\"xonsh_dir\", \"../\", \".\", \"~/\"],\n ),\n (\n- \"/home/wakka\" + os.pathsep + \"/home/jakka\" + os.pathsep + \"~/\",\n- [\"/home/wakka\", \"/home/jakka\", \"~/\"],\n+ \"/home/wakka\" + os.pathsep + \"/home/jakka\" + os.pathsep + \"~/\",\n+ [\"/home/wakka\", \"/home/jakka\", \"~/\"],\n ),\n ],\n )\n@@ -999,8 +1023,8 @@ def test_env_path_multipath(inp, exp, xession, env):\n (pathlib.Path(\"~/\"), [\"~\"]),\n (pathlib.Path(\".\"), [\".\"]),\n (\n- [\"/home/wakka\", pathlib.Path(\"/home/jakka\"), \"~/\"],\n- [\"/home/wakka\", \"/home/jakka\".replace(\"/\", os.sep), \"~/\"],\n+ [\"/home/wakka\", pathlib.Path(\"/home/jakka\"), \"~/\"],\n+ [\"/home/wakka\", \"/home/jakka\".replace(\"/\", os.sep), \"~/\"],\n ),\n ([\"/home/wakka\", pathlib.Path(\"../\"), \"../\"], [\"/home/wakka\", \"..\", \"../\"]),\n ([\"/home/wakka\", pathlib.Path(\"~/\"), \"~/\"], [\"/home/wakka\", \"~\", \"~/\"]),\n@@ -1028,8 +1052,8 @@ def 
mkpath(*paths):\n \"inp, exp\",\n [\n (\n- [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n- [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\")],\n+ [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n+ [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\")],\n )\n ],\n )\n@@ -1042,8 +1066,8 @@ def test_env_path_slice_get_all_except_last_element(inp, exp):\n \"inp, exp\",\n [\n (\n- [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n- [mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n+ [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n+ [mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"yakka\")],\n )\n ],\n )\n@@ -1056,14 +1080,14 @@ def test_env_path_slice_get_all_except_first_element(inp, exp):\n \"inp, exp_a, exp_b\",\n [\n (\n- [\n- mkpath(\"home\", \"wakka\"),\n- mkpath(\"home\", \"jakka\"),\n- mkpath(\"home\", \"yakka\"),\n- mkpath(\"home\", \"takka\"),\n- ],\n- [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"yakka\")],\n- [mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"takka\")],\n+ [\n+ mkpath(\"home\", \"wakka\"),\n+ mkpath(\"home\", \"jakka\"),\n+ mkpath(\"home\", \"yakka\"),\n+ mkpath(\"home\", \"takka\"),\n+ ],\n+ [mkpath(\"home\", \"wakka\"), mkpath(\"home\", \"yakka\")],\n+ [mkpath(\"home\", \"jakka\"), mkpath(\"home\", \"takka\")],\n )\n ],\n )\n@@ -1078,14 +1102,14 @@ def test_env_path_slice_path_with_step(inp, exp_a, exp_b):\n \"inp, exp\",\n [\n (\n- [\n- mkpath(\"home\", \"wakka\"),\n- mkpath(\"home\", \"xakka\"),\n- mkpath(\"other\", \"zakka\"),\n- mkpath(\"another\", \"akka\"),\n- mkpath(\"home\", \"bakka\"),\n- ],\n- [mkpath(\"other\", \"zakka\"), mkpath(\"another\", \"akka\")],\n+ [\n+ mkpath(\"home\", \"wakka\"),\n+ mkpath(\"home\", \"xakka\"),\n+ mkpath(\"other\", \"zakka\"),\n+ mkpath(\"another\", \"akka\"),\n+ mkpath(\"home\", \"bakka\"),\n+ ],\n+ [mkpath(\"other\", \"zakka\"), mkpath(\"another\", \"akka\")],\n )\n ],\n )\n@@ -1232,8 +1256,8 @@ def test_ensure_slice(inp, exp):\n [\n ((range(50), slice(25, 40)), list(i for i in range(25, 40))),\n (\n- ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [slice(1, 4), slice(6, None)]),\n- [2, 3, 4, 7, 8, 9, 10],\n+ ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [slice(1, 4), slice(6, None)]),\n+ [2, 3, 4, 7, 8, 9, 10],\n ),\n (([1, 2, 3, 4, 5], [slice(-2, None), slice(-5, -3)]), [4, 5, 1, 2]),\n ],\n@@ -1429,25 +1453,25 @@ def test_escape_windows_cmd_string(st, esc):\n (\"\", '\"\"', None),\n (\"foo\", \"foo\", '\"foo\"'),\n (\n- r'arg1 \"hallo, \"world\"\" \"\\some\\path with\\spaces\")',\n- r'\"arg1 \\\"hallo, \\\"world\\\"\\\" \\\"\\some\\path with\\spaces\\\")\"',\n- None,\n+ r'arg1 \"hallo, \"world\"\" \"\\some\\path with\\spaces\")',\n+ r'\"arg1 \\\"hallo, \\\"world\\\"\\\" \\\"\\some\\path with\\spaces\\\")\"',\n+ None,\n ),\n (\n- r'\"argument\"2\" argument3 argument4',\n- r'\"\\\"argument\\\"2\\\" argument3 argument4\"',\n- None,\n+ r'\"argument\"2\" argument3 argument4',\n+ r'\"\\\"argument\\\"2\\\" argument3 argument4\"',\n+ None,\n ),\n (r'\"\\foo\\bar bar\\foo\\\" arg', r'\"\\\"\\foo\\bar bar\\foo\\\\\\\" arg\"', None),\n (\n- r\"\\\\machine\\dir\\file.bat\",\n- r\"\\\\machine\\dir\\file.bat\",\n- r'\"\\\\machine\\dir\\file.bat\"',\n+ r\"\\\\machine\\dir\\file.bat\",\n+ r\"\\\\machine\\dir\\file.bat\",\n+ r'\"\\\\machine\\dir\\file.bat\"',\n ),\n (\n- r'\"\\\\machine\\dir space\\file.bat\"',\n- r'\"\\\"\\\\machine\\dir space\\file.bat\\\"\"',\n- None,\n+ 
r'\"\\\\machine\\dir space\\file.bat\"',\n+ r'\"\\\"\\\\machine\\dir space\\file.bat\\\"\"',\n+ None,\n ),\n ],\n )\n@@ -1601,15 +1625,15 @@ def test_expandvars(inp, exp, xession):\n (572392800.0, None, 572392800.0),\n (\"42.1459\", None, 42.1459),\n (\n- dt.datetime(2016, 8, 2, 13, 24),\n- None,\n- dt.datetime(2016, 8, 2, 13, 24).timestamp(),\n+ dt.datetime(2016, 8, 2, 13, 24),\n+ None,\n+ dt.datetime(2016, 8, 2, 13, 24).timestamp(),\n ),\n (\"2016-8-10 16:14\", None, dt.datetime(2016, 8, 10, 16, 14).timestamp()),\n (\n- \"2016/8/10 16:14:40\",\n- \"%Y/%m/%d %H:%M:%S\",\n- dt.datetime(2016, 8, 10, 16, 14, 40).timestamp(),\n+ \"2016/8/10 16:14:40\",\n+ \"%Y/%m/%d %H:%M:%S\",\n+ dt.datetime(2016, 8, 10, 16, 14, 40).timestamp(),\n ),\n ],\n )\n@@ -1663,17 +1687,17 @@ def test_swap_values():\n \"arguments, expected_docstring\",\n [\n (\n- {\"deprecated_in\": \"0.5.10\", \"removed_in\": \"0.6.0\"},\n- \"my_function has been deprecated in version 0.5.10 and will be removed \"\n- \"in version 0.6.0\",\n+ {\"deprecated_in\": \"0.5.10\", \"removed_in\": \"0.6.0\"},\n+ \"my_function has been deprecated in version 0.5.10 and will be removed \"\n+ \"in version 0.6.0\",\n ),\n (\n- {\"deprecated_in\": \"0.5.10\"},\n- \"my_function has been deprecated in version 0.5.10\",\n+ {\"deprecated_in\": \"0.5.10\"},\n+ \"my_function has been deprecated in version 0.5.10\",\n ),\n (\n- {\"removed_in\": \"0.6.0\"},\n- \"my_function has been deprecated and will be removed in version 0.6.0\",\n+ {\"removed_in\": \"0.6.0\"},\n+ \"my_function has been deprecated and will be removed in version 0.6.0\",\n ),\n ({}, \"my_function has been deprecated\"),\n ],\n@@ -1690,18 +1714,18 @@ def my_function():\n \"arguments, expected_docstring\",\n [\n (\n- {\"deprecated_in\": \"0.5.10\", \"removed_in\": \"0.6.0\"},\n- \"Does nothing.\\n\\nmy_function has been deprecated in version 0.5.10 and \"\n- \"will be removed in version 0.6.0\",\n+ {\"deprecated_in\": \"0.5.10\", \"removed_in\": \"0.6.0\"},\n+ \"Does nothing.\\n\\nmy_function has been deprecated in version 0.5.10 and \"\n+ \"will be removed in version 0.6.0\",\n ),\n (\n- {\"deprecated_in\": \"0.5.10\"},\n- \"Does nothing.\\n\\nmy_function has been deprecated in version 0.5.10\",\n+ {\"deprecated_in\": \"0.5.10\"},\n+ \"Does nothing.\\n\\nmy_function has been deprecated in version 0.5.10\",\n ),\n (\n- {\"removed_in\": \"0.6.0\"},\n- \"Does nothing.\\n\\nmy_function has been deprecated and will be removed \"\n- \"in version 0.6.0\",\n+ {\"removed_in\": \"0.6.0\"},\n+ \"Does nothing.\\n\\nmy_function has been deprecated and will be removed \"\n+ \"in version 0.6.0\",\n ),\n ({}, \"Does nothing.\\n\\nmy_function has been deprecated\"),\n ],\n@@ -1827,19 +1851,19 @@ def test_all_permutations():\n [\n (\"test1\", {}, {}), # empty styles\n (\n- \"test2\",\n- {\"Token.Literal.String.Single\": \"#ff0000\"},\n- {\"Token.Literal.String.Single\": \"#ff0000\"},\n+ \"test2\",\n+ {\"Token.Literal.String.Single\": \"#ff0000\"},\n+ {\"Token.Literal.String.Single\": \"#ff0000\"},\n ), # str key\n (\n- \"test3\",\n- {\"Literal.String.Single\": \"#ff0000\"},\n- {\"Token.Literal.String.Single\": \"#ff0000\"},\n+ \"test3\",\n+ {\"Literal.String.Single\": \"#ff0000\"},\n+ {\"Token.Literal.String.Single\": \"#ff0000\"},\n ), # short str key\n (\n- \"test4\",\n- {\"RED\": \"#ff0000\"},\n- {\"Token.Color.RED\": \"#ff0000\"},\n+ \"test4\",\n+ {\"RED\": \"#ff0000\"},\n+ {\"Token.Color.RED\": \"#ff0000\"},\n ), # color\n ],\n )\ndiff --git a/xonsh/tools.py b/xonsh/tools.py\nindex 
359c9295aa..36e2a04f89 100644\n--- a/xonsh/tools.py\n+++ b/xonsh/tools.py\n@@ -283,7 +283,8 @@ def add(self, data, front=False, replace=False):\n if data not in self._l:\n self._l.insert(0 if front else len(self._l), data)\n elif replace:\n- self._l.remove(data)\n+ # https://stackoverflow.com/a/25251306/1621381\n+ self._l = list(filter(lambda x: x != data, self._l))\n self._l.insert(0 if front else len(self._l), data)\n \n \n" }
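To make the behaviour under review concrete, here is a minimal pytest-style sketch that condenses what the new tests in `patch_to_review` assert about `EnvPath.add`. It is illustrative only and not part of the record above: it assumes it runs the way xonsh's own test suite does, against a build that already contains this change (before the change, `list.remove()` dropped only the first duplicate, so extra copies survived `replace=True`).

```python
# Hedged sketch condensing the tests added in the reviewed commit.
# Assumes a xonsh checkout that includes the EnvPath.add fix and the same
# environment xonsh's own test suite runs in; the paths are illustrative
# POSIX-style strings taken from those tests.
from xonsh.tools import EnvPath


def test_add_with_replace_collapses_every_duplicate():
    # Two copies of the same entry: replace=True removes both before
    # re-inserting, so exactly one copy remains, at the front.
    path = EnvPath(["/home/wakka", "/home/wakka/bin", "/home/wakka/bin"])
    path.add("/home/wakka/bin", front=True, replace=True)
    assert path == ["/home/wakka/bin", "/home/wakka"]

    # With front=False the single surviving copy lands at the end instead.
    path = EnvPath(["/home/wakka", "/home/wakka/bin", "/home/wakka/bin"])
    path.add("/home/wakka/bin", front=False, replace=True)
    assert path == ["/home/wakka", "/home/wakka/bin"]
```

The reviewer's only comment (next field) concerns test readability, asking for the expected list to be inlined into the assertion, which the sketch above already does.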
[ { "diff_hunk": "@@ -942,6 +942,30 @@ def test_env_path_add(left, right, exp):\n assert exp == obs\n \n \n+def test_env_path_add_replace_no_dupes_front_replace_existing():\n+ # Test replaces without dupes when added to front when adding existing entry\n+ path = EnvPath([\"/home/wakka\", \"/home/wakka/bin\"])\n+ path.add(\"/home/wakka/bin\", front=True, replace=True)\n+ exp = [\"/home/wakka/bin\", \"/home/wakka\"]\n+ assert exp == path", "line": null, "original_line": 950, "original_start_line": 949, "path": "tests/test_tools.py", "start_line": null, "text": "@user1:\nInlining `exp` will make the tests clearer:\r\n```python\r\nassert path == [\"/home/wakka/bin\", \"/home/wakka\"]\r\n```" } ]
e15ea850ab374527f9c5b4c1a0738e3cf663684a
diff --git a/news/fixduplicatepaths.rst b/news/fixduplicatepaths.rst new file mode 100644 index 0000000000..8beed2482a --- /dev/null +++ b/news/fixduplicatepaths.rst @@ -0,0 +1,15 @@ +**Added:** None + +**Changed:** + +* <news item> + +**Deprecated:** None + +**Removed:** None + +**Fixed:** + +* Fix Duplicate paths left over when add paths to Path via xonsh.tools.EnvPath + +**Security:** None diff --git a/tests/test_tools.py b/tests/test_tools.py index 6dd1b34049..2d9a5c57c0 100644 --- a/tests/test_tools.py +++ b/tests/test_tools.py @@ -942,6 +942,70 @@ def test_env_path_add(left, right, exp): assert exp == obs +def test_env_path_add_replace_no_dupes_front_replace_existing(): + # Test replaces without dupes when added to front when adding existing entry + path = EnvPath( + [os.pathsep.join(["home", "wakka"]), os.pathsep.join(["home", "wakka", "bin"])] + ) + path.add(os.pathsep.join(["home", "wakka", "bin"]), front=True, replace=True) + assert path == [ + os.pathsep.join(["home", "wakka", "bin"]), + os.pathsep.join(["home", "wakka"]), + ] + + +def test_env_path_add_replace_no_dupes_front_replace_multiple(): + # Test replaces without dupes when added to front when multiple existing occurrences + path = EnvPath( + [ + os.pathsep.join(["home", "wakka"]), + os.pathsep.join(["home", "wakka", "bin"]), + os.pathsep.join(["home", "wakka", "bin"]), + ] + ) + path.add(os.pathsep.join(["home", "wakka", "bin"]), front=True, replace=True) + assert path == [ + os.pathsep.join(["home", "wakka", "bin"]), + os.pathsep.join(["home", "wakka"]), + ] + + +def test_env_path_add_replace_no_dupes_back_replace_multiple(): + # Test replaces without dupes when not added to front + path = EnvPath( + [ + os.pathsep.join(["home", "wakka"]), + os.pathsep.join(["home", "wakka", "bin"]), + os.pathsep.join(["home", "wakka", "bin"]), + ] + ) + path.add(os.pathsep.join(["home", "wakka", "bin"]), front=False, replace=True) + assert path == [ + os.pathsep.join(["home", "wakka"]), + os.pathsep.join(["home", "wakka", "bin"]), + ] + + +def test_env_path_add_pathlib(): + os.pathsep.join(["home", "wakka", "bin"]) + path = EnvPath( + [ + os.pathsep.join(["home", "wakka"]), + os.pathsep.join(["home", "wakka", "bin"]), + os.pathsep.join(["home", "wakka", "bin"]), + ] + ) + path.add( + pathlib.Path(os.pathsep.join(["home", "wakka", "bin"])), + front=False, + replace=True, + ) + assert path == [ + os.pathsep.join(["home", "wakka"]), + os.pathsep.join(["home", "wakka", "bin"]), + ] + + # helper def expand(path): return os.path.expanduser(os.path.expandvars(path)) diff --git a/xonsh/tools.py b/xonsh/tools.py index 359c9295aa..8058a8d4d2 100644 --- a/xonsh/tools.py +++ b/xonsh/tools.py @@ -280,10 +280,12 @@ def add(self, data, front=False, replace=False): None """ + data = str(expand_path(data)) if data not in self._l: self._l.insert(0 if front else len(self._l), data) elif replace: - self._l.remove(data) + # https://stackoverflow.com/a/25251306/1621381 + self._l = list(filter(lambda x: x != data, self._l)) self._l.insert(0 if front else len(self._l), data) diff --git a/xonsh/xontribs_meta.py b/xonsh/xontribs_meta.py index 702caa06cd..1261e89e4a 100644 --- a/xonsh/xontribs_meta.py +++ b/xonsh/xontribs_meta.py @@ -141,7 +141,7 @@ def define_xontribs(): ), ), "avox_poetry": Xontrib( - url="github.com/jnoortheen/xontrib-avox-poetry", + url="https://github.com/jnoortheen/xontrib-avox-poetry", description="auto-activate venv as one cd into a poetry project folder. 
" "Activate ``.venv`` inside the project folder is also supported.", package=_XontribPkg( @@ -193,7 +193,7 @@ def define_xontribs(): package=core_pkg, ), "broot": Xontrib( - url="github.com/jnoortheen/xontrib-broot", + url="https://github.com/jnoortheen/xontrib-broot", description="supports broot with br alias", package=_XontribPkg( name="xontrib-broot", @@ -203,7 +203,7 @@ def define_xontribs(): ), ), "powerline3": Xontrib( - url="github.com/jnoortheen/xontrib-powerline3", + url="https://github.com/jnoortheen/xontrib-powerline3", description="Powerline theme with native $PROMPT_FIELDS support.", package=_XontribPkg( name="xontrib-powerline3",
{ "difficulty": "medium", "estimated_review_effort": 3, "problem_domain": "Bug Fixes" }