question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
|---|---|---|---|---|---|---|
79,320,289 | 2024-12-31 | https://stackoverflow.com/questions/79320289/why-cant-i-wrap-lgbm | I'm using LGBM to forecast the relative change of a numerical quantity. I'm using the MSLE (Mean Squared Log Error) loss function to optimize my model and to get the correct scaling of errors. Since MSLE isn't native to LGBM, I have to implement it myself. But lucky me, the math can be simplified a ton. This is my impl... | Root Cause scikit-learn expects that each of the keyword arguments to an estimator's __init__() will exactly correspond to a public attribute on instances of the class. Per https://scikit-learn.org/stable/developers/develop.html every keyword argument accepted by __init__ should correspond to an attribute on the insta... | 1 | 1 |
79,320,303 | 2024-12-31 | https://stackoverflow.com/questions/79320303/artifacts-with-pygame-when-trying-to-update-visible-sprites-only | I'm learning the basics of the pygame library and already struggling. The "game" at this point only has a player and walls. There are 2 main surfaces: "world" (the actual game map) and "screen" (which serves as a viewport for "view_src" w/ scaling & scrolling, "viewport" is the corresponding rect). Here's the problem: ... | Found the problem and the fix thanks to Kingsley's nudge. The issue: Group.clear() clears the sprites drawn by the last .draw() of that exact same group. So using a different group for .clear() and .draw() doesn't work, and the continuity it needs to function is also lost by re-assigning the "visible" group each time. ... | 2 | 0 |
79,316,973 | 2024-12-30 | https://stackoverflow.com/questions/79316973/improve-computational-time-and-memory-usage-of-the-calculation-of-a-large-matrix | I want to calculate a matrix G whose elements are scalars, calculated as: I want to calculate this matrix for large n > 10000, d > 30. My code is below, but it has a huge overhead and still takes a very long time. How can I make this computation as fast as possible, without using a GPU, and minimize th... | A convenient way is to note that each entry can also be written as: with the above notation the computation becomes much easier: import numpy as np from tqdm import tqdm from sklearn.gaussian_process.kernels import Matern from yaspin import yaspin import time from memory_profiler import profile ##-----------------... | 1 | 2 |
79,313,502 | 2024-12-28 | https://stackoverflow.com/questions/79313502/extracting-owner-s-username-from-nested-page-on-huggingface | I am scraping the HuggingFace research forum (https://discuss.huggingface.co/c/research/7/l/latest) using Selenium. I am able to successfully extract the following attributes from the main page of the forum: Activity Date View Count Replies Count Title URL However, I am encountering an issue when trying to extract th... | All the data you're after comes from two API endpoints. Most of what you already have can be fetched from the first one. If you follow the post, you'll get even more data and you'll find the posters section, where you can find your owner, aka the Original Poster. This is just to push you in the right direction (and no selen... | 2 | 2 |
79,319,663 | 2024-12-31 | https://stackoverflow.com/questions/79319663/fastapi-apache-409-response-from-fastapi-is-converted-to-502-what-can-be-the | I have a FastAPI application which, in general, works fine. My setup is Apache as a proxy with a FastAPI server behind it. This is the Apache config: ProxyPass /fs http://127.0.0.1:8000/fs retry=1 acquire=3000 timeout=600 Keepalive=On disablereuse=ON ProxyPassReverse /fs http://127.0.0.1:8000/fs I have one endpoint that... | So, I have found the reason. When there is a file upload, you need to read the input buffer in any case, even if you want to return an error. In my case I had to add a try/except to empty the buffer when an exception happens. Something like try: ... my original code except Exception as e: # Empty input buffer here to avoid ... | 1 | 0 |
79,316,958 | 2024-12-30 | https://stackoverflow.com/questions/79316958/mlagents-learn-help-is-giving-errors-python-3-11-3-10-3-9-3-8 | I am trying to install mlagents. I got to the Python part, but after creating a virtual environment with pyenv and setting the local version to 3.10, 3.9, and 3.8, it works on none of them. I upgraded pip, installed mlagents, then torch, torchvision, and torchaudio. Then I tested mlagents-learn --help and then because o... | Try deleting your Unity project and making a new one. Unity says to use conda, so try that too. Use Python 3.9. | 1 | 2 |
79,318,540 | 2024-12-30 | https://stackoverflow.com/questions/79318540/django-model-foreign-key-to-whichever-model-calls-it | I am getting back into Django after a few years, and am running into the following problem. I am making a system with 2 models: a survey and an update. I want to make a notification model that would automatically have an object added when I add a survey object or update object, and the notification object w... | GenericForeignKey to the rescue: A normal ForeignKey can only “point to” one other model, which means that if the TaggedItem model used a ForeignKey it would have to choose one and only one model to store tags for. The contenttypes application provides a special field type (GenericForeignKey) which works around this a... | 2 | 2 |
79,319,263 | 2024-12-31 | https://stackoverflow.com/questions/79319263/why-does-geopandas-dissolve-function-keep-working-forever | All, I am trying to use the GeoPandas dissolve function to aggregate a few countries; the function countries.dissolve keeps running forever. Here is a minimal script. import geopandas as gpd shape='/Volumes/TwoGb/shape/fwdshapfileoftheworld/' countries=gpd.read_file(shape+'TM_WORLD_BORDERS-0.3.shp') # Add columns count... | Dissolve works when I try it; it finishes in a few seconds. My GeoPandas version is 1.0.1. import geopandas as gpd df = gpd.read_file(r"C:\Users\bera\Downloads\TM_WORLD_BORDERS-0.3.shp") df.plot(column="NAME") df2 = df.dissolve() df2.plot() There are some invalid geometries that might cause problems for you? T... | 1 | 2 |
79,318,939 | 2024-12-31 | https://stackoverflow.com/questions/79318939/loaded-keras-model-throws-error-while-predicting-likely-issues-with-masking | I am currently developing and testing a RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras'. I then load this model in another file and predict some valu... | That error is due to the mask_value that you pass into tf.keras.layers.Masking not getting serialized compatibly for deserialization. But because the mask_value of your masking layer is a tensor containing all 0s anyway, you can instead just pass a scalar value like this and it will eliminate the need to serialize a tensor while storing ... | 1 | 1 |
79,320,886 | 2024-12-31 | https://stackoverflow.com/questions/79320886/numpy-einsum-why-did-this-happen | Can you explain why this happened? import numpy as np a = np.array([[1,2], [3,4], [5,6] ]) b = np.array([[2,2,2], [2,2,2]]) print(np.einsum("xy,zx -> yx",a,b)) and the output of the code is: [[ 4 12 20] [ 8 16 24]] This means the answer is calculated like this: [1*2+1*2 , 3*2+3*2 , ...] But I expected it to be calcul... | Your code is equivalent to: (a[None] * b[..., None]).sum(axis=0).T You start with a (x, y) and b (z, x). First let's align the arrays: # a[None] shape: (1, x, y) array([[[1, 2], [3, 4], [5, 6]]]) # b[..., None] shape: (z, x, 1) array([[[2], [2], [2]], [[2], [2], [2]]]) and multiply: # a[None] * b[..., None] shape: (z... | 1 | 1 |
79,320,784 | 2024-12-31 | https://stackoverflow.com/questions/79320784/bot-not-responding-to-channel-posts-in-telegram-bot-api-python-telegram-bot | I'm developing a Telegram bot using python-telegram-bot to handle and reply to posts in a specific channel. The bot starts successfully and shows "Bot is running...", but it never replies to posts in the channel. Here's the relevant code for handling channel posts: async def handle_channel_post(self, update: Update, co... | The issue is with this part of the code: if message.chat.username != self.channel_username: return message.chat.username returns the channel username without the '@', while your self.channel_username includes the '@'. Try this: if message.chat.username != self.channel_username.replace("@", ""): return It removes the '@' from... | 3 | 2 |
79,318,200 | 2024-12-30 | https://stackoverflow.com/questions/79318200/return-placeholder-values-with-formatting-if-a-key-is-not-found | I want to silently ignore KeyErrors and instead replace them with placeholders if values are not found. For example: class Name: def __init__(self, name): self.name = name self.capitalized = name.capitalize() def __str__(self): return self.name "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.c... | TL;DR The best solution is to override get_field instead of get_value in CustomFormatter: class CustomFormatter(string.Formatter): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except (AttributeError, KeyError): return f"{{{field_name}}}", None Kudos to @blhsin... | 2 | 2 |
79,320,041 | 2024-12-31 | https://stackoverflow.com/questions/79320041/python-flask-blueprint-parameter | I need to pass a parameter (some_url) from the main app to the blueprint using Flask. This is my (oversimplified) app: app = Flask(__name__) app.register_blueprint(my_bp, url_prefix='/mybp', some_url="http....") This is my (oversimplified) blueprint: my_bp = Blueprint('mybp', __name__, url_prefix='/mybp') @repositories_... | You can use the g object for the current request, which stores temporary data; or you can use session to maintain data between multiple requests, which usually stores this data in the client browser as a cookie; or you can store the data in app.config to maintain a constant value. | 1 | 0 |
79,318,743 | 2024-12-30 | https://stackoverflow.com/questions/79318743/how-to-create-combinations-from-dataframes-for-a-specific-combination-size | Say I have a dataframe with 2 columns, how would I create all possible combinations for a specific combination size? Each row of the df should be treated as 1 item in the combination rather than 2 unique separate items. I want the columns of the combinations to be appended to the right. The solution should ideally be e... | An approach is to use itertools to generate the combinations. Define the combination size and generate all possible combinations of rows using itertools.combinations. Flatten each combination into a single list of values using itertools.chain. combination_df is created from the flattened combinations, and the columns are dynam... | 1 | 1 |
79,319,708 | 2024-12-31 | https://stackoverflow.com/questions/79319708/confused-by-documentation-about-behavior-of-globals-within-a-function | Per the Python documentation of globals(): For code within functions, this is set when the function is defined and remains the same regardless of where the function is called. I understood this as: calling globals() from within a function returns an identical dict to the one that represented the global namespace when ... | In fact this problem is only loosely related to the globals() builtin function and more closely related to the behaviour of mutable objects. Long story short, your observation is correct, and the documentation is absolutely accurate. The underlying cause is that Python variables are only references to... | 1 | 1 |
79,319,434 | 2024-12-31 | https://stackoverflow.com/questions/79319434/duplicate-null-columns-created-during-pivot-in-polars | I have this example dataframe in polars: df_example = pl.DataFrame( { "DATE": ["2024-11-11", "2024-11-11", "2024-11-12", "2024-11-12", "2024-11-13"], "A": [None, None, "option1", "option2", None], "B": [None, None, "YES", "YES", "NO"], } ) Which looks like this: DATE A B 0 2024-11-11 1 2024-11-11 2 202... | I ended up with: ( df_example.pipe( lambda df: df.group_by("DATE").agg( [ pl.col(col).eq(val).any().alias(f"{col}_{val}") for col in df.select(pl.exclude("DATE")).columns for val in df.get_column(col).unique().drop_nulls() ] ) ).sort("DATE") ) | 2 | 1 |
79,319,156 | 2024-12-31 | https://stackoverflow.com/questions/79319156/how-to-add-python-type-annotations-to-a-class-that-inherits-from-itself | I'm trying to add type annotations to an ElementList object that inherits from list and can contain either Element objects or other ElementList objects. When I run the following code through mypy: from typing import Self class Element: pass class ElementList(list[Element | Self]): pass elements = ElementList( [ Element()... | Your sample list argument to the ElementList constructor contains not just Elements and ElementLists but also actual lists, so a workaround of class ElementList(list["Element | ElementList"]): ... would not have worked, as @dROOOze pointed out in the comment, because list is not a subtype of ElementList. You can work a... | 1 | 1 |
79,317,395 | 2024-12-30 | https://stackoverflow.com/questions/79317395/multi-columns-legend-in-geodataframe | I tried to plot Jakarta's map based on the district. fig, ax = plt.subplots(1, figsize=(4.5,10)) jakarta_mandiri_planar.plot(ax=ax, column='Kecamatan', legend=True, legend_kwds={'loc':'center left'}) leg= ax.get_legend() leg.set_bbox_to_anchor((1.04, 0.5)) I plotted the legend on the right of the map, but I think it'... | Use the ncols keyword: df.plot(column="NAME", cmap="tab20", legend=True, figsize=(8,8)) df.plot(column="NAME", cmap="tab20", legend=True, figsize=(10,10), legend_kwds={"ncols":2, "loc":"lower left"}) | 1 | 1 |
79,315,937 | 2024-12-29 | https://stackoverflow.com/questions/79315937/in-ta-lib-cython-compiler-errors-internalerror-internal-compiler-error-com | While running a program in PyCharm using Python, I am unable to run it due to the error below: ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) > Package :TA-... | The stable release of TA-Lib-Precompiled only has wheels for Python 3.8 - 3.11 for Linux. You can install the Windows Subsystem for Linux (WSL), which provides a Linux environment on your Windows machine, and then use a supported Python version such as Python 3.11. See How to install Linux on Windows with WSL for detaile... | 2 | 1 |
79,317,602 | 2024-12-30 | https://stackoverflow.com/questions/79317602/python-selenium-need-help-in-locating-username-and-password | I am new to Selenium. I am trying to scrape financial data on TradingView by logging into https://www.tradingview.com/accounts/signin/ . I understand that I am facing a timeout issue right now; is there any way to fix this? Thank you to anybody helping, much appreciated. However, I am facing a lot of errors wi... | To locate the login form on the sign-in page, it is necessary to click the "Email" button first in order to proceed with submitting the login form. I have included the following two lines in the script to accomplish this. email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() The ... | 1 | 1 |
79,317,247 | 2024-12-30 | https://stackoverflow.com/questions/79317247/how-to-do-a-clean-install-of-python-from-source-in-a-docker-container-image-ge | Currently I have to create Docker images that build Python from source (for example, we need two different Python versions in a container, one for building and one for testing the application; also, we need to specify exactly the Python version we want to install, and newer versions are not supported via... | Research and read Dockerfile best practices, for example https://docs.docker.com/build/building/best-practices/#apt-get . Remove the src directory and any build artifacts after you are done installing. Remove packages in the same stage as you install them. Additionally, you might be interested in the pyenv project, which streaml... | 2 | 1 |
79,317,098 | 2024-12-30 | https://stackoverflow.com/questions/79317098/python-logging-filter-works-with-console-but-still-writes-to-file | I am saving the logs to a text file and displaying them to the console at the same time. I would like to apply a filter on the logs, so that some logs neither make it to the text file nor the console output. However, with this code, the logs that I would like to filter out are still being saved to the text file. The fi... | basicConfig created a FileHandler and a StreamHandler was also created and added to the logger. The filter was only applied to the StreamHandler. To filter both handlers, add the filter to the logger instead: import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().star... | 1 | 0 |
79,316,851 | 2024-12-30 | https://stackoverflow.com/questions/79316851/sympy-integration-with-cosine-function-under-a-square-root | I am trying to solve the integration integrate( sqrt(1 + cos(2 * x)), (x, 0, pi) ) Clearly, through pen and paper this is not hard, and the result is: But when doing this through Sympy, something does not seem correct. I tried the sympy codes as below. from sympy import * x = symbols("x", real=True) integrate(sqrt(1 ... | Adding a simplification in there will produce the correct result, but I'm not sure why it is having an issue in the first place. integrate(sqrt(1+cos(2*x)).simplify(), (x, 0, pi)) # 2*sqrt(2) | 5 | 3 |
79,316,346 | 2024-12-29 | https://stackoverflow.com/questions/79316346/how-to-include-exception-handling-within-a-python-pool-starmap-multiprocess | I'm using the metpy library to do weather calculations. I'm using the multiprocessing library to run them in parallel, but I get rare exceptions, which completely stop the program. I am not able to provide a minimal, reproducible example because I can't replicate the problems with the metpy library functions and becaus... | You can generalize run_ccl with a wrapper function that suppresses specified exceptions and returns NaN as a default value: from contextlib import suppress def suppressor(func, *exceptions): def wrapper(*args, **kwargs): with suppress(*exceptions): return func(*args, **kwargs) return float('nan') return wrapper with w... | 1 | 2 |
79,316,278 | 2024-12-29 | https://stackoverflow.com/questions/79316278/is-there-a-more-elegant-rewrite-for-this-python-enum-value-of-implementation | I would like to get a value_of implementation for the StrEnum (Python 3.9.x). For example: from enum import Enum class StrEnum(str, Enum): """Enum with str values""" pass class BaseStrEnum(StrEnum): """Base Enum""" @classmethod def value_of(cls, value): try: return cls[value] except KeyError: try: return cls(value) exc... | Since upon success of the first try block the function will return and won't execute the code that follows, there is no need to nest the second try block in the error handler of the first try block to begin with: def value_of(cls, value): try: return cls[value] except KeyError: pass try: return cls(value) except ValueE... | 2 | 2 |
79,316,309 | 2024-12-29 | https://stackoverflow.com/questions/79316309/how-does-this-code-execute-the-finally-block-even-though-its-never-evaluated-to | def divisive_recursion(n): try: if n <= 0: return 1 else: return n + divisive_recursion(n // divisive_recursion(n - 1)) except ZeroDivisionError: return -1 finally: if n == 2: print("Finally block executed for n=2") elif n == 1: print("Finally block executed for n=1") print(divisive_recursion(5)) Here, divisive_recurs... | In one of the comments, you ask "does that mean once the program encounters the crash, it will execute all the finally blocks upward the recursion before it finally crashes". And the answer is basically "yes". An exception isn't really a "crash", or perhaps think of it as a controlled way of crashing. Here is a simple ... | 2 | 3 |
79,316,399 | 2024-12-29 | https://stackoverflow.com/questions/79316399/how-do-i-remove-an-image-overlay-in-matplotlib | Using matplotlib and Python, I have a grey-scale image of labeled objects, on which I want to draw a homogeneously coloured overlay image with a position and shape based on a changeable input parameter - an object identifier. Basically an outline and enhancement of one of the objects in the image. I can generate the ove... | If you’re trying to update an overlay on a grayscale image without accumulating overlays, you can use this approach: import matplotlib.pyplot as plt import numpy as np import numpy.ma as ma def create_interactive_overlay(object_data): """ Creates a figure with a grayscale base image and functions to update overlays. Paramete... | 1 | 1 |
79,306,760 | 2024-12-25 | https://stackoverflow.com/questions/79306760/how-to-get-full-traceback-messages-when-the-open-syscall-is-banned | I am working on providing an environment for running users' untrusted python code. I use the python bindings of libseccomp library to avoid triggering unsafe system calls, and the service is running in a docker container. Here is the script that will be executed in my environment. P.S. The list of banned syscalls is fr... | EDIT: You will have to grant write access to stdout and stderr. Since these files are opened as the process is started, you can selectively restrict write access to these files only without having to worry about untrusted code modifying other files. You can add write permissions to stdout and stderr in your code like t... | 1 | 1 |
79,311,280 | 2024-12-27 | https://stackoverflow.com/questions/79311280/dask-var-and-std-with-ddof-in-groupby-context-and-other-aggregations | Suppose I want to compute variance and/or standard deviation with non-default ddof in a groupby context, I can do: df.groupby("a")["b"].var(ddof=2) If I want that to happen together with other aggregations, I can use: df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) My understanding is that to be able ... | As answered in Dask Discourse Forum, I don't think your custom Aggregation implementation is correct. However, a simpler solution can be applied: import dask.dataframe as dd import functools data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) var_ddof_2 = functools.pa... | 2 | 3 |
79,314,406 | 2024-12-28 | https://stackoverflow.com/questions/79314406/n-unique-aggregation-using-duckdb-relational-api | Say I have import duckdb rel = duckdb.sql('select * from values (1, 4), (1, 5), (2, 6) df(a, b)') rel Out[3]: ┌───────┬───────┐ │ a │ b │ │ int32 │ int32 │ ├───────┼───────┤ │ 1 │ 4 │ │ 1 │ 5 │ │ 2 │ 6 │ └───────┴───────┘ I can group by a and find the mean of 'b' by doing: rel.aggregate( [duckdb.FunctionExpression('m... | Updated: I couldn't find a proper way of doing count distinct, but you could use a combination of the array_agg() and array_unique() functions: rel.aggregate( [duckdb.FunctionExpression( 'array_unique', duckdb.FunctionExpression( 'array_agg', duckdb.ColumnExpression('b') ) )], group_expr='a', ) ┌────────────────────────────┐ ... | 2 | 1 |
79,314,321 | 2024-12-28 | https://stackoverflow.com/questions/79314321/use-an-expression-dictionary-to-calculate-row-wise-based-on-a-column-in-polars | I want to use an expression dictionary to perform calculations for a new column. I have this Polars dataframe: df = pl.DataFrame({ "col1": ["a", "b", "a"], "x": [1,2,3], "y": [2,2,5] }) And I have an expression dictionary: expr_dict = { "a": pl.col("x") * pl.col("y"), "b": pl.col("x"), } I want to create a column where e... | Use pl.when() for the conditional expression and pl.coalesce() to combine the conditional expressions together. df.with_columns( r = pl.coalesce( pl.when(pl.col.col1 == k).then(v) for k, v in expr_dict.items() ) ) shape: (3, 4) ┌──────┬─────┬─────┬─────┐ │ col1 ┆ x ┆ y ┆ r │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞═══... | 1 | 2 |
79,310,142 | 2024-12-26 | https://stackoverflow.com/questions/79310142/how-to-extract-sub-arrays-from-a-larger-array-with-two-start-and-two-stop-1-d-ar | I am looking for a way to vectorize the following code: # Let cube have shape (N, M, M) sub_arrays = np.empty((len(cube), 3, 3)) row_start = ... # Shape (N,) and are integers in range [0, M-2] row_end = ... # Shape (N,) and are integers in range [1, M-1] col_start = ... # Shape (N,) and are integers in range [0, M-2] col... | I believe this question is a duplicate of the one about Slicing along axis with varying indices. However, since it may not be obvious, I think it's okay to provide the answer in a new context with a somewhat different approach. From what I can see, you want to extract data from the cube using a sliding window of a fixe... | 3 | 1 |
79,313,103 | 2024-12-28 | https://stackoverflow.com/questions/79313103/asof-join-with-multiple-inequality-conditions | I have two dataframes: a (~600M rows) and b (~2M rows). What is the best approach for joining b onto a, when using 1 equality condition and 2 inequality conditions on the respective columns? a_1 = b_1 a_2 >= b_2 a_3 >= b_3 I have explored the following paths so far: Polars: join_asof(): only allows for 1 inequality... | Using Numba here is a good idea since the operation is particularly expensive. That being said, the complexity of the algorithm is O(n²) though it is not easy to do much better (without making the code much more complex). Moreover, the array b_1, which might not fit in the L3 cache, is fully read 5_000_000 times making... | 5 | 2 |
79,313,133 | 2024-12-28 | https://stackoverflow.com/questions/79313133/sqlalchemy-one-or-more-mappers-failed-to-initialize | I know this question has been asked a lot and, believe me, I checked the answers; my code looks fine to me, even though it gives an error, so it's not. Basically, I was trying to set up a relationship between two entities: User and Workout. from sqlalchemy import Integer,VARCHAR,TIMESTAMP from sqlalchemy.orm import mapped_c... | This is sort of a weird problem that I have not seen a perfect solution to. SQLAlchemy allows this "deferred" referencing of other models/etc by str name so that you don't end up with circular imports, i.e. User must import Workout and Workout must import User. The problem that happens is that by not directly referencin... | 2 | 0 |
79,313,343 | 2024-12-28 | https://stackoverflow.com/questions/79313343/how-to-fix-setuptools-scm-file-finders-git-listing-git-files-failed | I am using pyproject.toml to build a package. I use setuptools_scm to automatically determine the version number. I use python version 3.11.2, setuptools 66.1.1 and setuptools-scm 8.1.0. Here are the relevant parts of pyproject.toml # For a discussion on single-sourcing the version, see # https://packaging.python.org/g... | python3 -m build builds in 2 phases: 1st it builds sdist and then it builds wheel from the sdist in an isolated environment where there is no .git directory. It doesn't matter because at the wheel building phase version is already set in sdist and build gets the version from sdist, not from setuptools_scm. In short: yo... | 1 | 1 |
79,313,112 | 2024-12-28 | https://stackoverflow.com/questions/79313112/combine-two-pandas-dataframes-side-by-side-with-resulting-length-being-maxdf1 | Essentially, what I described in the title. I am trying to combine two dataframes (i.e. df1 & df2) where they have different amounts of columns (df1=3, df2=8) with varying row lengths. (The varying row lengths stem from me having a script that breaks main two excel lists into blocks based on a date condition). My goal ... | Your issue arises because you are concatenating dataframes vertically rather than horizontally. To achieve the desired output, you need to align rows from df1 and df2 with the same index and then concatenate horizontally. Here’s the updated code that would produce the output you want. I have added comments on the place... | 4 | 3 |
79,312,644 | 2024-12-27 | https://stackoverflow.com/questions/79312644/extracting-substring-between-optional-substrings | I need to extract a substring which is between two other substrings. But I would like to make the border substrings optional - if no substrings found then the whole string should be extracted. patt = r"(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdefg") # adg - as ... | By making the bc and ef patterns optional, you'll get into situations where the one is matched, while the other is not. Yet, you'd need both of them or neither. The requirement that you need the whole input to match when these delimiters are not present really overcomplicates it. Realise that if there is no match, sub ... | 3 | 3 |
79,312,133 | 2024-12-27 | https://stackoverflow.com/questions/79312133/getting-all-leaf-words-reverse-stemming-into-one-python-list | On the same lines as the solution provided in this link, I am trying to get all leaf words of one stem word. I am using the community-contributed (@Divyanshu Srivastava) package get_word_forms Imagine I have a shorter sample word list as follows: my_list = [' jail', ' belief',' board',' target', ' challenge', ' command... | One solution using nested list comprehensions after stripping forgotten spaces: all_words = [setx for word in my_list for setx in get_word_forms(word.strip()).values() if len(setx)] # Flatten the list of sets all_words = [word for setx in all_words for word in setx] # Remove the repetitions and sort the set all_words =... | 1 | 1 |
79,313,107 | 2024-12-28 | https://stackoverflow.com/questions/79313107/how-to-have-pyright-infer-type-from-an-enum-check | Can types be associated with enums, so that Pyright can infer the type from an equality check? (Without cast() or isinstance().) from dataclasses import dataclass from enum import Enum, auto class Type(Enum): FOO = auto() BAR = auto() @dataclass class Foo: type: Type @dataclass class Bar: type: Type item = next(i for i... | You want a discriminated union (also known as tagged union). In a discriminated union, there exists a discriminator (also known as a tag field) which can be used to differentiate the members. You currently have an union of Foo and Bar, and you want to discriminate them using the .type attribute. However, this field can... | 2 | 2 |
79,312,774 | 2024-12-27 | https://stackoverflow.com/questions/79312774/inconsistent-url-error-in-django-from-following-along-to-beginner-yt-tutorial | As you can see in the first screenshot, /products/new isn't showing up as a valid URL although I followed the coding tutorial from YouTube exactly. For some reason there's a blank character before "new" but no blank space in the current path I'm trying to request. I don't know if that's normal or not. I'm using django ... | Add a trailing slash / to your URLpatterns to resolve this issue i.e. new/ and trending/. Also as mentioned in my comment, I would suggest you upgrade to a secure version of Django to access newer features. | 3 | 2 |
79,310,840 | 2024-12-27 | https://stackoverflow.com/questions/79310840/pil-generate-an-image-from-applying-a-gradient-to-a-numpy-array | I have a 2d NumPy array with values from 0 to 1. I want to turn this array into a Pillow image. I can do the following, which gives me a nice greyscale image: arr = np.random.rand(100,100) img = Image.fromarray((255 * arr).astype(np.uint8)) Now, instead of making a greyscale image, I'd like to apply a custom gradient.... | Method 1: vectorization of your code Your code is almost already vectorized. Almost all operations of it can work indifferently on a float or on an array of floats Here is a vectorized version def get_color_atArr(arr): assert (arr>=0).all() and (arr<=1).all() n=len(gradient) gradient.append(gradient[-1]) gradient=np.ar... | 2 | 2 |
79,311,978 | 2024-12-27 | https://stackoverflow.com/questions/79311978/how-can-i-optimize-python-code-for-analysis-of-a-large-sales-dataset | I’m working on a question where I have to process a large set of sales transactions stored in a CSV file and summarize the results. The code is running slower than expected and taking too much time for execution, especially as the size of the dataset increases. I am using pandas to load and process the data, are there ... | First of all, the df['category'] = np.select(...) line is slow because of the implicit conversion of all strings to a list of string objects. You can strongly speed this up by creating a categorical column rather than string-based one, since strings are inherently slow to compute. df['category'] = pd.Categorical.from_c... | 1 | 3 |
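The categorical-column idea from the answer above can be sketched as follows; the column names and thresholds here are illustrative, not taken from the question:

```python
import numpy as np
import pandas as pd

# Hypothetical sales amounts; bucket boundaries are assumptions for the demo.
df = pd.DataFrame({"amount": [5.0, 50.0, 500.0, 5000.0]})

# np.select yields integer codes; from_codes builds a categorical column
# directly, avoiding per-row Python string objects.
codes = np.select(
    [df["amount"] < 10, df["amount"] < 100, df["amount"] < 1000],
    [0, 1, 2],
    default=3,
)
df["category"] = pd.Categorical.from_codes(
    codes, ["micro", "small", "medium", "large"]
)
```

Comparisons and groupbys on the resulting `category` dtype operate on integer codes rather than strings, which is where the speedup comes from.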
79,311,933 | 2024-12-27 | https://stackoverflow.com/questions/79311933/how-to-solve-multiple-and-nested-discriminators-with-pydantic-v2 | I am trying to validate Slack interaction payloads, that look like these: type: block_actions container: type: view ... type: block_actions container: type: message ... type: view_submission ... I use 3 different models for payloads coming to the same interaction endpoint: class MessageContainer(BaseModel): type: Li... | Not sure it's possible to use 2 discriminators to resolve one type (as you are trying to do). I can suggest you 3 options: 1. Split block_actions into block_message_actions and block_view_actions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel):... | 1 | 2 |
79,309,271 | 2024-12-26 | https://stackoverflow.com/questions/79309271/pandas-series-subtract-pandas-dataframe-strange-result | I'm wondering why subtracting a pandas DataFrame from a pandas Series produces such a strange result. df = pd.DataFrame(np.arange(10).reshape(2, 5), columns='a-b-c-d-e'.split('-')) df.max(axis=1) - df[['b']] What are the steps for pandas to produce the result? b 0 1 0 NaN NaN NaN 1 NaN NaN NaN | By default an operation between a DataFrame and a Series is broadcasted on the DataFrame by column, over the rows. This makes it easy to perform operations combining a DataFrame and aggregation per column: # let's subtract the DataFrame from its max per column df.max(axis=0) - df[['b']] a b c d e b NaN 5 NaN NaN NaN 1 Na... | 1 | 1 |
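The alignment rule described in that answer can be reproduced with a small runnable sketch, using the question's own frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(10).reshape(2, 5), columns=list("abcde"))

# Series - DataFrame aligns the Series INDEX with the DataFrame COLUMNS.
# df.max(axis=1) is indexed by row labels 0 and 1, df[['b']] has the single
# column 'b'; the union of labels shares nothing, so every cell is NaN.
result = df.max(axis=1) - df[["b"]]

# To subtract per row instead, broadcast explicitly on axis=0:
per_row = df[["b"]].rsub(df.max(axis=1), axis=0)
```

`rsub(..., axis=0)` computes `row_max - b` row by row, which is presumably what the asker intended.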
79,310,713 | 2024-12-27 | https://stackoverflow.com/questions/79310713/how-to-apply-the-capitalize-with-condition | I'm wondering how to use the capitalize function when another column has a specific value. For example, I want to change the first letter of students with Master's degree. # importing pandas as pd import pandas as pd # creating a dataframe df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Mas... | Here is the complete code: import pandas as pd # Creating the DataFrame df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Capitalize column A conditionally based on B df['A'] = df.apply(lambda row: row['A... | 1 | 1 |
79,309,886 | 2024-12-26 | https://stackoverflow.com/questions/79309886/parsing-units-out-of-column | I've got some data I'm reading into Python using Pandas and want to keep track of units with the Pint package. The values have a range of scales, so have mixed units, e.g. lengths are mostly meters but some are centimeters. For example the data: what,length foo,5.3 m bar,72 cm and I'd like to end up with the length co... | Going through the examples, it looks like pint_pandas is expecting numbers rather than strings. You can use apply to do the conversion: from pint import UnitRegistry ureg = UnitRegistry() df["length"].apply(lambda i: ureg(i)).astype("pint[m]") However, why keep the column as Quantity objects instead of just plain floa... | 1 | 2 |
79,309,190 | 2024-12-26 | https://stackoverflow.com/questions/79309190/numpy-convention-for-storing-time-series-of-vectors-and-matrices-items-in-rows | I'm working with discrete-time simulations of ODEs with time varying parameters. I have time series of various data (e.g. time series of state vectors generated by solve_ivp, time series of system matrices generated by my control algorithm, time series of system matrices in modal form, and so on). My question: in what ... | This is strongly dependent of the algorithms applied on your dataset. This problem is basically known as AoS versus SoA. For algorithm that does not benefit much from SIMD operations and accessing all fields, AoS can be better, otherwise SoA is often better. The optimal data structure is often AoSoA, but it is often a ... | 1 | 2 |
79,309,025 | 2024-12-26 | https://stackoverflow.com/questions/79309025/why-does-summing-data-grouped-by-df-iloc-0-also-sum-up-the-column-names | I have a DataFrame with a species column and four arbitrary data columns. I want to group it by species and sum up the four data columns for each one. I've tried to do this in two ways: once by grouping by df.columns[0] and once by grouping by df.iloc[:, 0]. data = { 'species': ['a', 'b', 'c', 'd', 'e', 'rt', 'gh', 'ed... | In groupby - column name is treated as an intrinsic grouping key, while a Series is treated as an external key. Reference - https://pandas.pydata.org/docs/reference/groupby.html When using df.iloc[:, 0]: Pandas considers the string values in the species column as a separate grouping key independent of the DataFrame str... | 2 | 0 |
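A minimal sketch of the intrinsic-vs-external key distinction described above (toy data, not the question's full frame):

```python
import pandas as pd

df = pd.DataFrame({
    "species": ["a", "a", "b"],
    "x": [1, 2, 3],
})

# Grouping by the column NAME: 'species' becomes the grouping key and is
# excluded from the aggregation.
by_name = df.groupby("species").sum()

# Grouping by an external Series: 'species' stays a regular data column,
# so .sum() concatenates its string values within each group.
by_series = df.groupby(df.iloc[:, 0]).sum()
```

The second result is what makes it look as if the "column names" were summed; it is really the string values being concatenated.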
79,308,731 | 2024-12-26 | https://stackoverflow.com/questions/79308731/safest-way-to-incrementally-append-to-a-file | I'm performing some calculations to generate chaotic solutions to a mathematical function. I have an infinite loop that looks something like this: f = open('solutions.csv', 'a') while True: x = generate_random_parameters() # x is a list of floats success = test_parameters(x) if success: print(','.join(map(str, x)), fil... | One simple approach to ensuring that the current call to print finishes before the program exits from a keyboard interrupt is to use a signal handler to unset a flag on which the while loop runs. Set the signal handler only when you're about to call print and reset the signal handler to the original when print returns,... | 3 | 2 |
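A minimal sketch of that signal-handler idea, writing to an in-memory buffer instead of the real solutions.csv (function names are illustrative):

```python
import io
import signal

stop_requested = False

def _request_stop(signum, frame):
    # Only record the interrupt; the write in progress is allowed to finish.
    global stop_requested
    stop_requested = True

def safe_write(f, values):
    # Swap in the flag-setting handler only for the duration of the write.
    old_handler = signal.signal(signal.SIGINT, _request_stop)
    try:
        print(",".join(map(str, values)), file=f, flush=True)
    finally:
        signal.signal(signal.SIGINT, old_handler)

buf = io.StringIO()
safe_write(buf, [1.0, 2.5])
```

The main loop would then check `stop_requested` between iterations and exit cleanly, so Ctrl-C never truncates a line mid-write.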
79,307,295 | 2024-12-25 | https://stackoverflow.com/questions/79307295/what-is-the-best-way-to-avoid-detecting-words-as-lines-in-opencv-linedetector | I am using OpenCV LineDetector class in order to parse tables. However, I face an issue when I try to detect lines inside the table. for the following image: I use img = cv2.imread(TABLE_PATH) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_ADV, sigma_scale=0.6) dlines =... | You got some lines detected, but that set contained some undesirable ones. You could just filter the set of lines for line length. If you do that, you can easily exclude the very short lines coming from the text in that picture. Implementation: that's a list comprehension, only including lines that are long enough. Wri... | 3 | 0 |
79,332,328 | 2025-1-6 | https://stackoverflow.com/questions/79332328/pydantic-model-how-to-exclude-field-from-being-hashed-eq-compared | I have the following hashable pydantic model: class TafReport(BaseModel, frozen=True): download_date: dt icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str Now I don't want these reports to be considered different just because their download date is different (I insert that with th... | Unfortunately there is no built-in option at the moment, but there are two options that you can consider: Changing from BaseModel to a Pydantic dataclass: from dataclasses import field from datetime import datetime as dt from pydantic import TypeAdapter from pydantic.dataclasses import dataclass @dataclass(frozen=True)... | 5 | 1 |
79,336,604 | 2025-1-7 | https://stackoverflow.com/questions/79336604/failed-creating-mock-folders-with-pyfakefs | I'm working on a project that uses pyfakefs to mock my filesystem to test folder creation and missing folders in a previously defined tree structure. I'm using Python 3.13 on Windows and get this output from the terminal after running my test: Terminal output: (Does anyone have a tip for formatting terminal output with... | The issue has been acknowledged, fixed, and the fix has been included in the 5.7.4 release of pyfakefs. No workaround should thus be necessary, any longer. | 1 | 1 |
79,321,826 | 2025-1-1 | https://stackoverflow.com/questions/79321826/seleniumbase-cdp-mode-opening-new-tabs | I am currently writing a python program which uses a seleniumbase web bot with CDP mode activated: with SB(uc=True, test=True, xvfb=True, incognito=True, agent=<user_agent>, headless=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.activate_cdp_mode(temp_email_gen_url) ... I need to be able to create ne... | For better or worse there isn't an "open tab" feature in CDP mode. The main developer of seleniumbase suggests using a separate driver in CDP mode for each tab as follows, equivalent to using "open in new window" on every link: from seleniumbase import SB # opens all links on the target page with a second driver with S... | 1 | 1 |
79,330,304 | 2025-1-5 | https://stackoverflow.com/questions/79330304/optimizing-sieving-code-in-the-self-initializing-quadratic-sieve-for-pypy | I've coded up the Self Initializing Quadratic Sieve (SIQS) in Python, but it has been coded with respect to being as fast as possible in PyPy(not native Python). Here is the complete code: import logging import time import math from math import sqrt, ceil, floor, exp, log2, log, isqrt from rich.live import Live from ri... | This answer provides a way to significantly speed up the code by: transformation of a fraction of the Python code into a C++ code low-level optimization of the most expensive parts parallelization of one very-expensive part The overall algorithm is left unchanged and the logic of the Python code too. That being said ... | 1 | 1 |
79,336,731 | 2025-1-7 | https://stackoverflow.com/questions/79336731/mock-date-today-but-leave-other-date-methods-alone | I am trying to test some python code that involves setting/comparing dates, and so I am trying to leverage unittest.mock in my testing (using pytest). The current problem I'm hitting is that using patch appears to override all the other methods for the patched class (datetime.date) and so causes other errors because my... | Description of changes to the file main.py I have found a possible solution by modifying the imports in your production code (main.py): in addition to importing names from the module datetime, I also import the module datetime itself: # following are my imports import datetime from datetime import date, timedelta ... | 2 | 1 |
79,336,417 | 2025-1-7 | https://stackoverflow.com/questions/79336417/why-is-my-shared-memory-reading-zeroes-on-macos | I am writing an interface to allow communication between a main program written in C and extension scripts written in python and run in a separate python interpreter process. The interface uses a UNIX socket for small amounts of data and POSIX shared memory for large arrays. The C program handles all creation, resource... | I'm a colleague of OP and have looked into the issue we were having. When creating a shared memory object on the C side, use a leading slash in its name: int fd = shm_open("/mem", O_RDWR|O_CREAT, S_IRUSR|S_IWUSR); When accessing it on the Python side, do not use a leading slash as it is added automatically on POSIX s... | 3 | 3 |
79,337,064 | 2025-1-7 | https://stackoverflow.com/questions/79337064/how-to-run-async-code-in-ipython-startup-files | I have set IPYTHONDIR=.ipython, and created a startup file at .ipython/profile_default/startup/01_hello.py. Now, when I run ipython, it executes the contents of that file as if they had been entered into the IPython shell. I can run sync code this way: # contents of 01_hello.py print( "hello!" ) $ ipython Python 3.12.... | From version 8, ipython uses a function called get_asyncio_loop to get access to the event loop that it runs async cells on. You can use this event loop during your startup script to run any tasks you want on the same event loop that async cells will run on. NB. This is only uses for the asyncio package in Python's sta... | 1 | 2 |
79,337,434 | 2025-1-7 | https://stackoverflow.com/questions/79337434/whats-the-best-way-to-use-a-sklearn-feature-selector-in-a-grid-search-to-evalu | I am training a sklearn classifier, and inserted in a pipeline a feature selection step. Via grid search, I would like to determine what's the number of features that allows me to maximize performance. Still, I'd like to explore in the grid search the possibility that no feature selection, just a "passthrough" step, is... | The parameter n_features_to_select can be an integer (number of features) or a float (proportion of features). So instead of [1, 2, 3], the pipeline can run with [1/3, 2/3, 1.0]. To get the scores for each combination of parameters in the grid search, you can run display(pd.DataFrame(grid_search.cv_results_)) The resu... | 1 | 1 |
79,336,594 | 2025-1-7 | https://stackoverflow.com/questions/79336594/uwsgi-with-https-getting-socket-option-missing | I am running a Flask application on Docker with uwsgi. I have been running it for years now, but we need to add https to it. I know I can use an HAProxy and do ssl offloading, but in our current setup we can't do it this way, at least not right now. We need to do the SSL directly on the application. I have tried multipl... | You can use the following example to run your flask application with uwsgi and docker. I will provide a minimal example and you can use it to expand to your needs. The uwsgi config was extracted from the docs. uwsgi.ini [uwsgi] shared-socket = 0.0.0.0:443 https = =0,ssl/server.crt,ssl/server.key master = true module = app:a... | 1 | 0 |
79,330,032 | 2025-1-5 | https://stackoverflow.com/questions/79330032/generalized-nonsymmetric-eigensolver-python | How do I solve a nonsymmetric eigenproblem. In terms of scipy.sparse.linalg.eigsh the matrix needs to be "real symmetric square matrix or complex Hermitian matrix A" (https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html). and the same for scipy.sparse.linalg.eigs. "M must represent a real... | Despite your vibroacoustic link, you are (presumably) solving the undamped system where M is the mass matrix and K is the stiffness matrix. With solutions proportional to eiωt this becomes or This is the system that you want to solve. However, M should be an invertible matrix, so you can rewrite this as so looking ... | 2 | 4 |
79,335,173 | 2025-1-7 | https://stackoverflow.com/questions/79335173/good-r2-score-but-huge-parameter-uncertainty | I'm using a quadratic function to fit my data. I have a good R2 score but huge uncertainty in my fitting parameters. Here is the graph and the results: R2 score: 0.9698143924536671 uncertainty in a, b, and y0: 116.93787913, 10647.11867972, 116.93787935 How should I interpret this result? Here is how I defined the quadrati... | Your model is over-parametrized. You can tell when you expand the polynomial: a * (1 - x**2 / (2*b**2)) + y0 -> a - x**2 * a / (2*b**2) + y0 -> y0+a - x**2 * a / (2*b**2) There are only two independent parameters, y0 + a and a / (2*b**2). You will be able to fit just as well with any two of your original parameters, and... | 1 | 3 |
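The degeneracy that answer points out can be made explicit by collecting terms: only two combinations of the three fit parameters actually enter the model, so the fitter cannot pin down a, b, and y0 individually, which shows up as huge parameter uncertainties despite a good R2.

```latex
f(x) = a\left(1 - \frac{x^2}{2b^2}\right) + y_0
     = \underbrace{(y_0 + a)}_{\text{offset}} \;-\; \underbrace{\frac{a}{2b^2}}_{\text{curvature}}\, x^2
```

Any (a, b, y0) triple yielding the same offset and curvature fits the data equally well.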
79,333,976 | 2025-1-6 | https://stackoverflow.com/questions/79333976/is-it-possible-to-convert-from-qdatetime-to-python-datetime-without-loosing-time | I am trying to convert a QDateTime object in pyside6 to a python datetime object. Consider the following code: from PySide6.QtCore import Qt, QDateTime, QTimeZone import datetime qdatetime = QDateTime.currentDateTime() print(qdatetime.offsetFromUtc()) qdatetime.setTimeZone(QTimeZone.UTC) print(qdatetime.toString(Qt.ISO... | The toPython() function of PySide originates from the original toPyDateTime() function of PyQt4, and at that time the QDateTime class didn't provide time zone information. The behavior wasn't changed even with the more recent PyQt5/6 implementation, which wasn't updated to reflect the time zone info introduced since Qt... | 2 | 0 |
79,335,053 | 2025-1-7 | https://stackoverflow.com/questions/79335053/replace-substring-if-key-is-found-in-another-file | I have files associated with people scattered around different directories. I can find them with a master file. Some of them need to be pulled into my working directory. Once I've pulled them, I need to update the master file to reflect the change. To keep track of which files were moved, I record the person's name in ... | Frame challenge: I don't need to compare the two files, I can do sed shenanigans. Extract matching lines from master to have the entire line. Wrangle the lines to have just the path. Use sed grouping to copy the old path and filename, then build a sed command in place inside a new file. Execute the new file. This mea... | 3 | 1 |
79,337,201 | 2025-1-7 | https://stackoverflow.com/questions/79337201/mypy-explicit-package-based-vs-setuptools | I’ve a project structured as follows: . ├── hello │ ├── __init__.py │ └── animal.py ├── tests │ ├── __init__.py │ └── test_animal.py ├── README └── pyproject.toml This is just a personal Python library, and doesn’t need to be published or distributed. The usage consists of running pytest and mypy from the root directo... | I found a pip ticket for this exact problem where pytest was confused by the presence of a build directory. One of the suggestions in the ticket was to ignore the build directory. Apparently, in-place builds were introduced in pip 20.1, and is now the default. The following configuration in pyproject.toml solves the pr... | 1 | 0 |
79,337,249 | 2025-1-7 | https://stackoverflow.com/questions/79337249/load-dll-with-ctypes-fails | I use a proprietary Python package. Within this Python package the following DLL load command fails. scripting_api = ctypes.CDLL("scripting_api_interface") Could not find module 'scripting_api_interface' (or one of its dependencies). Try using the full path with constructor syntax. I know the path to the DLL scripting... | Call CDLL with the version that works before importing the package. Once the DLL is loaded, additional loads are ignored. Example (Win11 x64)... test.c (simple DLL source, MSVC: cl /LD test.c): __declspec(dllexport) void func() {} With test.dll in the current directory, loading will fail without an explicit path for D... | 1 | 1 |
79,336,866 | 2025-1-7 | https://stackoverflow.com/questions/79336866/half-precision-in-ctypes | I need to be able to seamlessly interact with half-precision floating-point values in a ctypes structure. I have a working solution, but I'm dissatisfied with it: import ctypes import struct packed = struct.pack('<Ife', 4, 2.3, 1.2) print('Packed:', packed.hex()) class c_half(ctypes.c_ubyte*2): @property def value(self... | Using descriptors gets close to what you want. Declare the ctypes fields with underscores and add the descriptors as class variables. If you are not familiar with descriptors, read the guide in the link above. import ctypes as ct import struct # Descriptor implementation class Half: def __set_name__(self, owner, name):... | 5 | 3 |
79,336,210 | 2025-1-7 | https://stackoverflow.com/questions/79336210/unable-to-accurately-detect-top-7-prominent-peaks-in-data-using-python-s-find-pe | I hope to identify the peaks in a segment of data (selecting the top 7 points with the highest prominences), which are clearly visible to the naked eye. However, I am unable to successfully obtain the results using the find_peaks function. The data is accessible in this gist. Error Result: If I directly use find_peaks:... | The issue with your approach is that you rely on the prominence, which is the local height of the peaks, and not a good fit with your type of data. From your total dataset, it looks indeed clear to the naked eye that there are high "peaks" relative to the top of the large blue area, but this is no longer obvious once w... | 4 | 5 |
79,333,765 | 2025-1-6 | https://stackoverflow.com/questions/79333765/type-hinting-and-type-checking-for-intenum-custom-types | Qt has several IntEnum's that support custom , user-specified types or roles. A few examples are: QtCore.Qt.ItemDataRole.UserRole QtCore.QEvent.Type.User In both cases, a user type/role is created by choosing an integer >= the User type/role myType = QtCore.QEvent.Type.User + 1 The problem is that all of the functio... | In PySide6/PyQt6, the type of a user-defined int enum member should be preserved by using the constructor of the enum: from PySide6.QtCore import QEvent class MyEvent(QEvent): def __init__(self) -> None: super().__init__(QEvent.Type(QEvent.Type.User + 1)) Assuming the latest PySide6 stubs are installed, a file with th... | 2 | 1 |
79,336,023 | 2025-1-7 | https://stackoverflow.com/questions/79336023/forward-fill-numpy-matrix-mask-with-values-based-on-condition | I have the following matrix import numpy as np A = np.array([ [0, 0, 0, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0] ]).astype(bool) How do I fill all the rows column-wise after a column is True? My desired output: [0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1]... | You could use logical_or combined with accumulate: np.logical_or.accumulate(A, axis=1) Output: array([[False, False, False, False, True, True, True], [False, False, False, False, False, False, True], [ True, True, True, True, True, True, True], [False, False, False, False, False, False, False]]) If you want integers,... | 3 | 6 |
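A runnable version of the accepted approach, reproducing the question's array:

```python
import numpy as np

A = np.array([
    [0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]).astype(bool)

# Running OR along each row: once a True appears, everything after stays True.
filled = np.logical_or.accumulate(A, axis=1)

# Cast back to integers if the 0/1 representation is wanted.
as_int = filled.astype(int)
```

`ufunc.accumulate` applies the ufunc cumulatively along the axis, so this is the boolean analogue of a cumulative sum.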
79,335,580 | 2025-1-7 | https://stackoverflow.com/questions/79335580/getting-strange-output-when-using-group-by-apply-with-np-select-function | I am working with time series data wherein I am trying to perform outlier detection using the IQR method. Sample Data: import pandas as pd import numpy as np df = pd.DataFrame({'datecol' : pd.date_range('2024-1-1', '2024-12-31'), 'val' : np.random.randint(low=100, high=5000, size=366)}) my function: def is_ou... | You should use groupby.transform, not apply: df['flag'] = df.groupby(df['datecol'].dt.weekday)['val'].transform(is_outlier) Alternatively, explicitly return a Series and use group_keys=False: def is_outlier(x): iqr = x.quantile(.75) - x.quantile(.25) outlier = (x <= x.quantile(.25) - 1.5*iqr) | (x >= x.quantile(.75) +... | 1 | 1 |
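A self-contained sketch of the transform approach, with random data seeded for reproducibility and the IQR rule from the question:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "datecol": pd.date_range("2024-01-01", "2024-12-31"),  # 366 days (leap year)
    "val": rng.integers(100, 5000, size=366).astype(float),
})

def is_outlier(x):
    iqr = x.quantile(0.75) - x.quantile(0.25)
    return (x <= x.quantile(0.25) - 1.5 * iqr) | (x >= x.quantile(0.75) + 1.5 * iqr)

# transform returns one value per input row, aligned with the original index,
# unlike apply, which may stack per-group results into an unexpected shape.
df["flag"] = df.groupby(df["datecol"].dt.weekday)["val"].transform(is_outlier)
```

Each row's flag is computed relative to the IQR of its own weekday group.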
79,334,958 | 2025-1-7 | https://stackoverflow.com/questions/79334958/python-web-scraping-bulk-downloading-linked-files-from-the-sec-aaer-site-403 | I've been trying to download 300 linked files from SEC's AAER site. Most of the links are pdf's, but some are websites that I would need to save to pdf instead of just downloading. I'm teaching myself some python web scraping and this didn't seem like too hard a task, but I havent been able to get past the 403 error wh... | After copying all the Headers inside the request header when you manually open the link containing the PDF: pdf_response = requests.get(link_href, headers={ "Host": "www.sec.gov", "User-Agent": "YOUR_USER_AGENT", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en... | 2 | 1 |
79,334,065 | 2025-1-6 | https://stackoverflow.com/questions/79334065/geopandas-read-file-of-a-shapefile-gives-error-if-crs-parameter-is-specified | All, I use ESRI World Countries Generalized shape file, that is available here using GeoPandas shp_file =gpd.read_file('World_Countries/World_Countries_Generalized.shp') print(shp_file.crs) The CRS I got is EPSG:3857, yet once I add the CRS to the gpd.read_file as the following shp_file1 =gpd.read_file('../../Download... | Since geopandas 1.0, another, faster, underlying library is used by default to read files in geopandas: pyogrio. Some more info can be found here: fiona vs pyogrio. When this new library is used, the crs parameter is not supported. The easiest solution for this specific case is to just remove the crs='EPSG:3857' parame... | 1 | 1 |
79,333,616 | 2025-1-6 | https://stackoverflow.com/questions/79333616/remove-a-tiny-repeated-object-from-an-image-using-opencv | I use OpenCV and Python and I want to remove the "+" sign that is repeated on my image. The following is an example of an image in question. The goal is to produce the same image, but with the "+" signs removed. How can I achieve this? I've tried using the below code to achieve this. img = cv2.imread(image_path) gray ... | This is basically implementing the comment of fmw42; that is, create the mask by thresholding for white regions (also I increased your dilation to 2 iterations for a visually slightly better result): import cv2 import numpy as np image_path = ... # TODO: adjust as necessary img = cv2.imread(image_path) mask = (cv2.cvtC... | 3 | 6 |
79,324,851 | 2025-1-2 | https://stackoverflow.com/questions/79324851/parsing-multi-index-pandas-data-frame-for-tuple-list-appendage | Problem/Task: create a function that inputs a pandas data frame represented by the markdown in Fig 1 and converts/outputs it to a list with the structure represented in Fig 2. I look forward to any feedback/support anyone might have! Fig 1: Pandas Data Frame (Function Input) as Markdown resources ('Widget A (idx = 0... | Here's one approach: Minimal Reproducible Example import pandas as pd import numpy as np data = [[1, np.nan, np.nan], [np.nan, 2, 2], [np.nan, 3, np.nan]] m_idx = pd.MultiIndex.from_tuples( [('A', 't1'), ('A', 't2'), ('B', 't1')] ) idx = pd.Index([f'm_{i}' for i in range(1, 4)], name='resources') df = pd.DataFrame(data... | 1 | 1 |
79,333,087 | 2025-1-6 | https://stackoverflow.com/questions/79333087/how-do-i-resolve-snowparksqlexception-user-is-empty-in-function-for-snowpar | When invoking a Snowpark-registered SPROC, I get the following error: SnowparkSQLException: (1304): <uuid>: 100357 (P0000): <uuid>: Python Interpreter Error: snowflake.connector.errors.ProgrammingError: 251005: User is empty in function MY_FUNCTION with handler compute for the following python code and invocation: def... | Using Snowflake Notebooks: from snowflake.snowpark.session import Session my_session = snowflake.snowpark.context.get_active_session() def my_function(session: Session, input_table: str, limit: int) -> None: return None sproc_my_function = my_session.sproc.register(func=my_function, name='my_function', is_permanent=Tru... | 1 | 1 |
79,326,576 | 2025-1-3 | https://stackoverflow.com/questions/79326576/writing-to-application-insights-from-fastapi-with-managed-identity | I am trying to log from a FastAPI application to Azure application insights. It is working with a connection string, but I would like it to be working with managed identity. The code below does not fail - no errors or anything. But it does not log anything. Any suggestions to solve the problem, or how to troubleshoot as... | Yes, you can use managed identity, but you still need to use a connection string along with it, as it's a mandatory parameter. Also, to use managed identity, you need to deploy your code to an Azure resource such as a Function App or Web App. I have deployed the code below to a web app and then enabled system managed identity... | 1 | 1 |
79,331,362 | 2025-1-5 | https://stackoverflow.com/questions/79331362/how-to-keep-track-of-ongoing-tasks-and-prevent-duplicate-tasks | I have a python script that receives incoming messages from web and process the data: async with asyncio.TaskGroup() as task_group: processor_task = task_group.create_task(processor.start(message), name=f"process_message_{message.sender_id}_task" I added naming for tasks with f-string interpolation so in the debugging... | you can keep track of the working tasks like this background_tasks = set() async with asyncio.TaskGroup() as task_group: if sender_id not in background_tasks: # check if the sender has a task running processor_task = task_group.create_task(processor.start(message), name=f"process_message_{message.sender_id}_task") back... | 1 | 1 |
79,330,931 | 2025-1-5 | https://stackoverflow.com/questions/79330931/gridsearchcv-with-data-indexed-by-time | I am trying to use the GridSearchCV from sklearn.model_selection. My data is a set of classification that is indexed by time. As a result, when doing cross validation, I want the training set to be exclusively the data with time all before the data in the test set. So my training set X_train, y_train looks like Time fe... | the default cross-validation in GridSearchCV does not consider temporal dependency when splitting. You can use TimeSeriesSplit instead of the default CV from model selection. TimeSeriesSplit is built for this exact use case of yours. | 1 | 1 |
79,331,099 | 2025-1-5 | https://stackoverflow.com/questions/79331099/converting-nested-query-string-requests-to-a-dictionary | I'm experiencing some difficulties converting querystring data to a well-formed dictionary in my view. Here's my view class VendorPayloadLayerView(generics.GenericAPIView): permission_classes = (permissions.AllowAny,) def get(self, request, *args, **kwargs): print("Here's the request *****") print(request) payload = ... | That is because the querystring is just a "one level" dictionary: it maps keys, which are strings, to values, which are strings. The fact that the value looks like a JSON blob does not make much sense. Here you even make it worse because you use one key in the querystring, that maps to no value. The key here is a JS... | 1 | 1 |
79,330,953 | 2025-1-5 | https://stackoverflow.com/questions/79330953/lemma-of-puncutation-in-spacy | I'm using spacy for some downstream tasks, mainly noun phrase extraction. My texts contain a lot of parentheses, and while applying the lemma, I noticed all the punctuation that doesn't end sentences becomes --: import spacy nlp = spacy.load("de_core_news_sm") doc = nlp("(Das ist ein Test!)") for token in doc: print(f"... | I can confirm the issue with German, but when I try the equivalent sentence in Dutch the ( and ) are kept as lemma instead of --. So this is something particular in the German model. You can override the default lemmata if you want: import spacy nlp = spacy.load("de_core_news_sm") nlp.get_pipe("attribute_ruler").add([[... | 2 | 2 |
79,330,764 | 2025-1-5 | https://stackoverflow.com/questions/79330764/how-can-i-silence-undefinedmetricwarning | How can I silence the following warning while running GridSearchCV(model, params, cv=10, scoring='precision', verbose=1, n_jobs=20, refit=True)? /opt/dev/myenv/lib/python3.9/site-packages/sklearn/metrics/_classification.py:1531: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted s... | Try this code import warnings from sklearn.exceptions import UndefinedMetricWarning with warnings.catch_warnings(): # Ignore only UndefinedMetricWarning warnings.filterwarnings("ignore", category=UndefinedMetricWarning) | 1 | 1 |
79,330,650 | 2025-1-5 | https://stackoverflow.com/questions/79330650/there-are-some-basics-i-need-to-understand-regarding-python-custom-user-models | I wanted to delete a user. After a bit of struggling I ended up with: views.py the_user = get_user_model() @login_required def del_user(request): email = request.user.email the_user.objects.filter(email=email).delete() messages.warning(request, "bruker slettet.") return redirect("index") But I really do not understand... | request is an instance of an HttpRequest object. It represents the current received request. The AuthenticationMiddleware authenticates the user that made the request (based on cookies or however you have configured it) and adds the request.user attribute to it, so your code can get who the current user is that made th... | 2 | 3 |
79,330,420 | 2025-1-5 | https://stackoverflow.com/questions/79330420/how-to-get-maximum-average-of-subarray | I have been working on this LeetCode question https://leetcode.com/problems/maximum-average-subarray-i/description/ I have been able to create a solution after understanding the sliding window algorithm. I was wondering, with my code, where my logic is going wrong; I do think my issue seems to be in this section sectio... | temp is the sum of the subarray, while k is the number of elements in it - you shouldn't be comparing the two, they're two totally different things. To implement a sliding window, I'd sum the first k elements of nums, and then run over it where in each iteration I drop the first element and add the last and then check ... | 1 | 1 |
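The sliding-window recipe that answer describes can be sketched as a generic implementation (not the asker's original code):

```python
def find_max_average(nums, k):
    # Sum of the first window of size k.
    window = sum(nums[:k])
    best = window
    # Slide: drop the element leaving the window, add the one entering.
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]
        best = max(best, window)
    # Compare raw sums throughout; divide by k only once at the end.
    return best / k

print(find_max_average([1, 12, -5, -6, 50, 3], 4))  # → 12.75
```

Keeping the comparison in terms of sums avoids exactly the sum-vs-count confusion the answer points out.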
79,329,522 | 2025-1-4 | https://stackoverflow.com/questions/79329522/filtering-from-index-and-comparing-row-value-with-all-values-in-column | Starting with this DataFrame: df_1 = pl.DataFrame({ 'name': ['Alpha', 'Alpha', 'Alpha', 'Alpha', 'Alpha'], 'index': [0, 3, 4, 7, 9], 'limit': [12, 18, 11, 5, 9], 'price': [10, 15, 12, 8, 11] }) ┌───────┬───────┬───────┬───────┐ │ name ┆ index ┆ limit ┆ price │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞══════... | Option 1: df.join_where (experimental) out = ( df_1.join( df_1 .join_where( df_1.select('index', 'price'), pl.col('index_right') > pl.col('index'), pl.col('price_right') >= pl.col('limit') ) .group_by('index') .agg( pl.col('index_right').min().alias('min_index') ), on='index', how='left' ) ) Output: shape: (5, 5) ┌───... | 2 | 2 |
79,322,010 | 2025-1-1 | https://stackoverflow.com/questions/79322010/how-to-make-mypy-correctly-type-check-a-function-using-functools-partial | I'm trying to create a function that returns a partially applied callable, but I'm encountering issues with mypy type checking. Here is my first implementation: from collections.abc import Callable from functools import partial ... | Callable is not expressive enough to be able to describe the function signature you are returning. Instead, you should use a Protocol. This is caused by your use of partial() to fill in the argument for j. That is, partial(f, j=4)(1, 2) is equivalent to f(1, 2, j=4) which means Python tries to pass both 2 and 4 as the ... | 4 | 4 |
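A minimal sketch of the Protocol approach the answer recommends (the names are illustrative, not from the thread):

```python
from functools import partial
from typing import Protocol


class TwoArgFunc(Protocol):
    # Describes "a callable taking two positional ints, returning int" --
    # a precise signature that a bare Callable alias cannot express here.
    def __call__(self, a: int, b: int) -> int: ...


def f(a: int, b: int, j: int) -> int:
    return a + b + j


def make(j: int) -> TwoArgFunc:
    # partial(f, j=4)(1, 2) is equivalent to f(1, 2, j=4).
    return partial(f, j=j)


g = make(4)
print(g(1, 2))  # -> 7
```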
79,328,082 | 2025-1-4 | https://stackoverflow.com/questions/79328082/python-cant-access-attribute-of-an-object-unless-i-check-nullity-why | A LeetCode problem which is about linked lists that goes with this structure: # Definition for singly-linked list. # class ListNode: # def __init__(self, val=0, next=None): # self.val = val # self.next = next Gave an error while attempting to print the val of the next node, but still worked when given a nullity check ... | When you reach the end of the list, nextNode will be None. None doesn't have a val attribute, so nextNode.val raises an exception. If you check whether it's None first, you don't execute that erroneous expression, so there's no error. When you use try/except, it catches the exception. But then in the except: block you ... | 1 | 1 |
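The guard the answer describes, in a tiny runnable form using the LeetCode node definition:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next


def collect(head):
    vals = []
    node = head
    while node is not None:   # guard: never touch .val on None
        vals.append(node.val)
        node = node.next      # becomes None at the end of the list
    return vals


print(collect(ListNode(1, ListNode(2))))  # -> [1, 2]
```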
79,327,275 | 2025-1-3 | https://stackoverflow.com/questions/79327275/gekko-using-apopt-isnt-optimizing-a-single-linear-equation-represented-as-a-pwl | I've run into an issue where I can't get APOPT to optimize an unconstrained single piecewise linear, and it's really throwing me for a loop. I feel like there's something I'm not understanding about model.pwl, but it's hard (for me) to find documentation outside of the GEKKO docs. Here's my minimal example: model = GEK... | A cubic spline is much more reliable in optimization than a piecewise linear function because it doesn't rely on slack variables and switching conditions. from gekko import GEKKO model = GEKKO(remote=False) model.options.SOLVER = 1 x = model.Var(lb=0, ub=4, integer=True) y = model.Var() model.cspline(x, y, [0, 1, 2, 3,... | 3 | 2 |
79,327,573 | 2025-1-3 | https://stackoverflow.com/questions/79327573/django-db-utils-operationalerror-no-such-column-home-student-schoolyear | models.py ''' class Person(models.Model): firstname = models.CharField(max_length=30) lastname = models.CharField(max_length=30) othernames = models.CharField(max_length=40) dateOfBirth = models.DateField() gender = models.CharField(max_length=20) birthGender = models.CharField(max_length=20) email = models.EmailField(... | Make sure you have done what @raphael said you should do. If the issue persists: python manage.py makemigrations --merge python manage.py migrate --fake To reset the db: python manage.py migrate *yourapp* zero python manage.py migrate | 1 | 1 |
79,327,540 | 2025-1-3 | https://stackoverflow.com/questions/79327540/how-to-reference-an-inner-class-or-attribute-before-it-is-fully-defined | I have a scenario where a class contains an inner class, and I want to reference that inner class (or its attributes) within the outer class. Here’s a concrete example using Django: from django.db import models from django.utils.translation import gettext_lazy as _ class DummyModel(models.Model): class StatusChoices(mo... | You can probably do this by defining the choices outside the class first, because the Meta class is actually constructed even before the status is accessible: # 🖟 outside DummyModel class StatusChoices(models.TextChoices): ACTIVE = 'active', _('Active') INACTIVE = 'inactive', _('Inactive') class DummyModel(models.Mode... | 1 | 1 |
79,322,646 | 2025-1-2 | https://stackoverflow.com/questions/79322646/problems-obtaining-an-intersected-linestring-using-geopandas | I am having an issue obtaining a linestring from an intersection with a polygon using GeoPandas. The linestring is self-intersecting, which is what is causing my issues. A line intersecting a polygon: Given the following code: import geopandas as gp from shapely.geometry import LineString, Polygon # Draw a polygon tha... | Because line_merge apparently doesn't reconstruct the single linestring when it is self-intersecting, the only option I see is to extract the coordinates and create a new linestring, which results in a single linestring. In the code sample I use some functions that are only available in shapely 2, so it needs a relativ... | 2 | 1 |
79,327,355 | 2025-1-3 | https://stackoverflow.com/questions/79327355/pandas-non-negative-integers-to-n-bits-binary-representation | I have a pandas Series containing strictly non-negative integers like so: 1 2 3 4 5 I want to convert them into n-bits binary representation based on the largest value. For example, the largest value here is 5, so we would have 3 bits/3 columns, and the resulting series would be something like this 0 0 1 0 1 0 0 1 1 1... | If your values are less than 255, you could unpackbits: s = pd.Series([1, 2, 3, 4, 5]) N = int(np.log2(s.max())) powers = 2**np.arange(N, -1, -1) out = pd.DataFrame(np.unpackbits(s.to_numpy(np.uint8)[:, None], axis=1)[:, -N-1:], index=s.index, columns=powers) If you have larger numbers, compute a mask with & and an a... | 2 | 1 |
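The unpackbits trick from the answer, shown with plain NumPy (no pandas) so the intermediate bit array is visible:

```python
import numpy as np

s = np.array([1, 2, 3, 4, 5], dtype=np.uint8)  # values must fit in 8 bits
n_bits = int(np.log2(s.max())) + 1             # 3 bits for a max of 5

# unpackbits expands each value to 8 bits; keep only the low n_bits columns.
bits = np.unpackbits(s[:, None], axis=1)[:, -n_bits:]
print(bits)
# [[0 0 1]
#  [0 1 0]
#  [0 1 1]
#  [1 0 0]
#  [1 0 1]]
```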
79,325,274 | 2025-1-3 | https://stackoverflow.com/questions/79325274/how-to-prevent-type-alias-defined-in-a-stub-file-from-being-used-in-other-module | I'm working on a Python 3.13.1 project using mypy 1.14.0 for static type checking. I have a module named module.py with a function function that returns a type with a very long name, Type_whose_name_is_so_long_that_we_do_not_want_to_call_it_over_and_over_again. To make the code more readable, I've defined a type alias ... | Apart from writing your own mypy plugin, there isn't really a way to do this. Prefixing items with an underscore is by far the overwhelmingly adopted convention to indicate names which aren't supposed to be exported (used outside of the module it is defined in); you can see this convention adopted in Python's own types... | 1 | 3 |
79,325,674 | 2025-1-3 | https://stackoverflow.com/questions/79325674/blackboxprotobuf-showing-positive-values-instead-of-negative-values-for-protobuf | I have an issue where blackboxprotobuf takes a protobuf response and returns a dictionary in which a few values that are supposed to be negative come out as positive. Calling an API with lat (40.741895) & long (-73.989308). Using this lat & long, a key is generated ('81859706') to be used in the api. For K... | The bit value that comes for the number is >>> bin(3555038608) '0b11010011111001011001010110010000' If you take the 2's complement of that number, you will get the negative value that you want. >>> def twos_comp(val, bits): ... """compute the 2's complement of int value val""" ... if (val & (1 << (bits - 1))) != 0: # if sign bit... | 1 | 1 |
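The answer's twos_comp helper is truncated above; a complete version of that standard idiom:

```python
def twos_comp(val, bits):
    """Interpret an unsigned `bits`-wide integer as signed two's complement."""
    if val & (1 << (bits - 1)):   # sign bit set -> the value is negative
        val -= 1 << bits
    return val


# The 32-bit unsigned value from the question decodes to a negative number:
print(twos_comp(3555038608, 32))  # -> -739928688
```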
79,325,219 | 2025-1-3 | https://stackoverflow.com/questions/79325219/how-can-i-scrape-event-links-and-contact-information-from-a-website-with-python | I am trying to scrape event links and contact information from the RaceRoster website (https://raceroster.com/search?q=5k&t=upcoming) using Python, requests, Pandas, and BeautifulSoup. The goal is to extract the Event Name, Event URL, Contact Name, and Email Address for each event and save the data into an Excel file s... | Use the API endpoint to get the data on upcoming events. Here's how: import requests from tabulate import tabulate import pandas as pd url = 'https://search.raceroster.com/search?q=5k&t=upcoming' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0... | 3 | 3 |
79,325,561 | 2025-1-3 | https://stackoverflow.com/questions/79325561/replace-first-row-value-with-last-row-value | I'm trying to take the value from the last row of a df col and replace it with the first value. I'm returning a value error. import pandas as pd df = pd.DataFrame({'name': ['tom','jon','sam','jane','bob'], 'age': [24,25,18,26,17], 'Notification': [np.nan,'2025-01-03 14:19:35','2025-01-03 14:19:39','2025-01-03 14:19:41'... | You need to edit this line: df.loc[df.index[0], 'age'] = df.loc[df.index[-1], 'age'] Complete code: import pandas as pd import numpy as np df = pd.DataFrame({'name': ['tom', 'jon', 'sam', 'jane', 'bob'], 'age': [np.nan, 25, 18, 26, 17], 'sex': ['male', 'male', 'male', 'female', 'male']}) df.loc[df.index[0], 'age'] = d... | 2 | 2 |
79,324,668 | 2025-1-2 | https://stackoverflow.com/questions/79324668/how-to-get-only-the-first-occurrence-of-each-increasing-value-in-numpy-array | While working on first-passage probabilities, I encountered this problem. I want to find a NumPythonic way (without explicit loops) to leave only the first occurrence of strictly increasing values in each row of a numpy array, while replacing repeated or non-increasing values with zeros. For instance, if arr = np.array... | Maximum can be accumulated per-row: >>> arr array([[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5], [1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4], [3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]]) >>> np.maximum.accumulate(arr, axis=1) array([[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4], [3, 3, 3, 3, 3, 3... | 2 | 3 |
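The answer stops at the accumulated maximum; one way to finish (my completion, not from the thread) is to keep a value only where the running maximum strictly increases:

```python
import numpy as np

arr = np.array([[1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5],
                [1, 1, 2, 2, 2, 3, 2, 2, 3, 3, 3, 4, 4],
                [3, 2, 1, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2]])

cummax = np.maximum.accumulate(arr, axis=1)
# A position is the first occurrence of a new maximum exactly where the
# running maximum strictly increases (the first column always qualifies).
keep = np.ones_like(arr, dtype=bool)
keep[:, 1:] = cummax[:, 1:] > cummax[:, :-1]
out = np.where(keep, arr, 0)
```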
79,324,524 | 2025-1-2 | https://stackoverflow.com/questions/79324524/attributeerror-with-instance-of-model-with-generic-foreign-field-in-created-by | I have 3 models that I am dealing with here: SurveyQuestion, Update, and Notification. I use a post_save signal to create an instance of the Notification model whenever an instance of SurveyQuestion or Update was created. The Notification model has a GenericForeignKey which goes to whichever model created it. Inside th... | You should use source, not source_object: Notification( source=instance, start_date=instance.start_date, end_date=instance.end_date ) A GenericForeignKey essentially combines two columns, the source_object (very bad name) that points to the type of the item the GenericForeignKey refers to, and a column that stores the ... | 2 | 1 |
79,322,581 | 2025-1-2 | https://stackoverflow.com/questions/79322581/is-this-a-false-positive-override-error-signature-of-method-incompatible-w | Although the method signature in Sub is compatible with Super, mypy rejects the override: Signature of "method" incompatible with supertype "Super". I'm using python 3.13.1 mypy 1.14.0 First, I made following test.pyi. from typing import overload class Super: def method(self, arg:Other|Super)->Super: pass class Sub(S... | As pointed out in a pyright ticket about this, the typing spec mentions this behaviour explicitly: If a callable B is overloaded with two or more signatures, it is assignable to callable A if at least one of the overloaded signatures in B is assignable to A There's a mypy ticket asking about the same problem. However... | 4 | 4 |
79,323,799 | 2025-1-2 | https://stackoverflow.com/questions/79323799/how-to-convert-matrix-to-block-matrix-using-numpy | Say I have a matrix like Matrix = [[A11, A12, A13, A14], [A21, A22, A23, A24], [A31, A32, A33, A34], [A41, A42, A43, A44]], and suppose I want to convert it to a block matrix [[A,B], [C,D]], where A = [[A11, A12], [A21, A22]] B = [[A13, A14], [A23, A24]] C = [[A31, A32], [A41, A42]] D = [[A33, A34], [A43, A44]]. What d... | Without using loops, you can reshape your array (and reorder the dimensions with moveaxis): A, B, C, D = np.moveaxis(Matrix.reshape((2,2,2,2)), 1, 2).reshape(-1, 2, 2) Or: (A, B), (C, D) = np.moveaxis(Matrix.reshape((2,2,2,2)), 1, 2) For a generic answer on an arbitrary shape: x, y = Matrix.shape (A, B), (C, D) = np.... | 2 | 3 |
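A quick check of the answer's reshape/moveaxis recipe on a concrete 4×4 matrix:

```python
import numpy as np

Matrix = np.arange(16).reshape(4, 4)
# reshape to (2, 2, 2, 2) indexes (block_row, row_in_block, block_col,
# col_in_block); moveaxis reorders that to (block_row, block_col,
# row_in_block, col_in_block), i.e. a 2x2 grid of 2x2 blocks.
(A, B), (C, D) = np.moveaxis(Matrix.reshape(2, 2, 2, 2), 1, 2)

print(A)  # top-left block: [[0 1], [4 5]]
print(D)  # bottom-right block: [[10 11], [14 15]]
```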
79,323,215 | 2025-1-2 | https://stackoverflow.com/questions/79323215/how-to-combine-columns-with-extra-strings-into-a-concatenated-string-column-in-p | I am trying to add another column that combines two columns (Total & percentage) into a result column (labels_value) that looks like: (Total) percentage%. Basically, I want to wrap the Total column in brackets and add a % string at the end of the combination of these two columns. df import polars as pl so_df = p... | Here are a few options: so_df.with_columns( labels_value_1=pl.format("({}) {}%", "Total", "percentage"), labels_value_2=pl.concat_str( pl.lit("("), "Total", pl.lit(") "), "percentage", pl.lit("%") ), labels_value_3=( "(" + pl.col("Total").cast(pl.String) + ") " + pl.col("percentage").cast(pl.String) + "%" ), ) # Only T... | 2 | 2 |