Web Development int64 0 1 | Data Science and Machine Learning int64 0 1 | Question stringlengths 28 6.1k | is_accepted bool 2 classes | Q_Id int64 337 51.9M | Score float64 -1 1.2 | Other int64 0 1 | Database and SQL int64 0 1 | Users Score int64 -8 412 | Answer stringlengths 14 7k | Python Basics and Environment int64 0 1 | ViewCount int64 13 1.34M | System Administration and DevOps int64 0 1 | Q_Score int64 0 1.53k | CreationDate stringlengths 23 23 | Tags stringlengths 6 90 | Title stringlengths 15 149 | Networking and APIs int64 1 1 | Available Count int64 1 12 | AnswerCount int64 1 28 | A_Id int64 635 72.5M | GUI and Desktop Applications int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | Is there any way to handle a message deleted by a user in a one-to-one chat, or in a group that the bot is a member of?
There is a method for the edited-message update, but not for deleted messages. | true | 48,484,272 | 1.2 | 1 | 0 | 9 | No. There is no way to track whether messages have been deleted or not. | 0 | 1,993 | 0 | 8 | 2018-01-28T07:49:00.000 | telegram-bot,python-telegram-bot | handle deleted message by user in telegram bot | 1 | 2 | 2 | 48,485,447 | 0 |
0 | 0 | I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrows (>, *, #, etc.).
My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I don't find an efficient way to tell networkx ... | false | 48,503,540 | 0 | 0 | 0 | 0 | I guess you can use a directed graph and store the direction as an attribute if you don't need to represent that directed graph. | 0 | 215 | 0 | 0 | 2018-01-29T14:27:00.000 | python,graph,networkx | Partially undirect graphs in Networkx | 1 | 2 | 2 | 48,504,499 | 0
0 | 0 | I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrows (>, *, #, etc.).
My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I don't find an efficient way to tell networkx ... | true | 48,503,540 | 1.2 | 0 | 0 | 0 | After searching a lot of different sources, the only way I've found to do a partially undirected graph is through adjacency matrices.
NetworkX has good tools to move between a graph and its adjacency matrix (in pandas and numpy array formats).
The disadvantage is that if you need networkx functions you have to program ... | 0 | 215 | 0 | 0 | 2018-01-29T14:27:00.000 | python,graph,networkx | Partially undirect graphs in Networkx | 1 | 2 | 2 | 48,583,218 | 0
1 | 0 | I am trying to give temporary download access to a bucket in my S3.
Using boto3.generate_presigned_url(), I have only managed to download a specific file from that bucket, but not the bucket itself.
Is there any option to do so, or is my only option to download the bucket content, zip it, upload it, and give access to th... | false | 48,517,407 | 0 | 0 | 0 | 0 | Have you tried cycling through the list of items in the bucket?
Do an aws s3 ls <bucket_name_with_Presigned_URL> and then use a for loop to get each item.
Hope this helps. | 0 | 1,170 | 0 | 0 | 2018-01-30T08:57:00.000 | python,amazon-web-services,amazon-s3,boto3,pre-signed-url | boto3 python generate pre signed url for a whole bucket | 1 | 1 | 1 | 52,748,706 | 0 |
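A minimal sketch of the per-object approach suggested in the answer above: since a pre-signed URL covers a single key, list the bucket's objects with boto3 and generate one URL per key. The bucket name and expiry below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder bucket name

# Pre-signed URLs are per-object, so generate one for every key in the bucket.
presigned_urls = {}
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        presigned_urls[obj["Key"]] = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": obj["Key"]},
            ExpiresIn=3600,  # one hour of temporary access
        )

for key, url in presigned_urls.items():
    print(key, url)
```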
0 | 0 | I was working with the boto3 module in Python and had created a bot which would find publicly accessible buckets, but this is done for a single user with his credentials. I am thinking of advancing the features and making the bot fetch all the publicly accessible buckets across every user account. I would like ... | false | 48,537,478 | 0.099668 | 0 | 0 | 1 | This is not possible.
There is no way to discover the names of all of the millions of buckets that exist. There are known to be at least 2,000,000,000,000 objects stored in S3, a number announced several years ago and probably substantially lower than the real number now. If each bucket had 1,000,000 of those objects... | 0 | 2,004 | 0 | 0 | 2018-01-31T08:15:00.000 | python-2.7,amazon-s3,boto3,s3-bucket | Find all the s3 public buckets | 1 | 1 | 2 | 48,553,966 | 0 |
1 | 0 | I have automation scripts where the implicitly_wait is parametrized so that the user will be able to set it. I have a default value of 20 seconds which I am aware of but there is a chance that the user has set it with a different value.
In one of my methods I would like to change the implicitly_wait (to lower it as muc... | false | 48,542,904 | 0 | 1 | 0 | 0 | After reading through the Selenium code and playing in the interpreter, it appears there is no way to retrieve the current implicit_wait value. This is a great opportunity to add a wrapper to your framework. The wrapper should be used any time a user wants to change the implicit wait value. The wrapper would store the ... | 0 | 31 | 0 | 0 | 2018-01-31T13:04:00.000 | python,python-3.x,selenium | How can I view the implicitly_wait that the webdriver was set with? | 1 | 1 | 1 | 48,543,491 | 0 |
0 | 0 | I am trying to model the spread of information on Twitter, so I need the number of tweets with specific hashtags and the time each tweet was posted. If possible I would also like to restrict the time period for which I am searching. So if I were examining tweets with the hashtag #ABC, I would like to know that there we... | false | 48,549,453 | -0.099668 | 1 | 0 | -1 | Twitter api provides historical data for a hashtag only up to past 10 days. There is no limit on number of tweets but they have put limitation on time.
There is no way to get historical data related to a hashtag past 10 days except:
You have access to their premium api (Twitter has recently launched its premium api wh... | 0 | 1,686 | 0 | 1 | 2018-01-31T18:51:00.000 | python,r,twitter,tweepy,twython | How can I get the number of tweets associated with a certain hashtag, and the timestamp of those tweets? | 1 | 1 | 2 | 48,580,106 | 0 |
1 | 0 | I'm using python-social-auth to allow users to login via SAML; everything's working correctly, except for the fact that if a logged-in user opens the SAML login page and logs in again as a different user, they'll get an association with both of the SAML users, rather than switch login.
I understand the purpose behind t... | true | 48,559,911 | 1.2 | 0 | 0 | 0 | There's no standard way to do it in python-social-auth, there are a few alternatives:
Override the login page and if there's a user authenticated, then log them out first, or show an error, whatever fits your projects.
Add a pipeline function and set it in the top that will act if user is not None, you can raise an er... | 0 | 187 | 0 | 0 | 2018-02-01T09:58:00.000 | django,python-social-auth | Python-social-auth: do not reassociate existing users | 1 | 1 | 1 | 48,586,214 | 0 |
0 | 0 | Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example:
<1ELEMENT>
text
<\1ELEMENT>
I use the function root=etree.parse('filepath'), and, by manually changing the \ to / in the text outside the compiler, the function works correctly.
... | true | 48,571,060 | 1.2 | 0 | 0 | -1 | You could
load the file
do the replacement, e.g.
string_containing_modified_data = data_as_string.replace('\\>', '/>')
use etree.fromstring(string_containing_modified_data) to parse the xml.
If possible, you should try to fix the writer, but I understand if you don't have the opportunity to do so. | 1 | 353 | 0 | 0 | 2018-02-01T20:18:00.000 | python,xml,xml-parsing | (python) parsing xml file but the elements ends with \ | 1 | 2 | 3 | 48,571,357 | 0 |
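A small sketch of the replace-then-parse idea from the first answer, assuming the closing tags look like <\TAG> (the tag name and the exact replacement pattern are illustrative and depend on how the writer mangles the tags). The standard-library parser is used here; lxml.etree exposes the same fromstring call.

```python
import xml.etree.ElementTree as etree  # lxml.etree works the same way

# Hypothetical sample with the malformed closing tag described in the question.
raw = "<ELEMENT>text<\\ELEMENT>"

# Turn the backslash that opens each closing tag into a slash, then parse.
fixed = raw.replace("<\\", "</")
root = etree.fromstring(fixed)
print(root.tag, root.text)  # ELEMENT text
```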
0 | 0 | Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example:
<1ELEMENT>
text
<\1ELEMENT>
I use the function root=etree.parse('filepath'), and, by manually changing the \ to / in the text outside the compiler, the function works correctly.
... | false | 48,571,060 | 0 | 0 | 0 | 0 | This isn't an XML file.
Given that the format of the file is garbage, are you sure the content isn't garbage too? I wouldn't want to work with data from such an untrustworthy source.
If you want to parse this data you will need to work out what rules it follows. If those rules are something fairly similar to XML rules ... | 1 | 353 | 0 | 0 | 2018-02-01T20:18:00.000 | python,xml,xml-parsing | (python) parsing xml file but the elements ends with \ | 1 | 2 | 3 | 48,572,589 | 0 |
1 | 0 | I'm trying to scrape Facebook public page likes data using Python. My scraper uses the post number in order to scrape the likes data. However, some posts have more than 6000 likes and I can only scrape 6000 likes, also I have been told that this is due to Facebook restriction which doesn't allow to scrape more than 600... | false | 48,577,599 | -0.099668 | 0 | 0 | -1 | In tags I see facebook-graph-api, which has limitations. Why don't you use requrests + lxml? It would be such easier, and as you want to scrape public pages, you don't even have to login, so it could be easily solve. | 0 | 1,695 | 0 | 0 | 2018-02-02T07:19:00.000 | python,facebook-graph-api,scrape | scrape facebook likes with python | 1 | 1 | 2 | 48,578,008 | 0 |
0 | 0 | I'm coming from NetBeans and evaluating other, more flexible IDEs supporting more languages (e.g. Python) than just PHP and related ones.
I kept an eye on Eclipse that seems to be the best choice; at the time I was not able to find an easy solution to keep the original project on my machine and automatically send / sync... | false | 48,599,891 | 0.379949 | 1 | 0 | 2 | RSE is a very poor solution, as you noted it's a one-shot sync and is useless if you want to develop locally and only deploy occasionally. For many years I used the Aptana Studio suite of plugins which included excellent upload/sync tools for individual files or whole projects, let you diff everything against a remote ... | 0 | 1,140 | 0 | 1 | 2018-02-03T17:13:00.000 | java,php,python,eclipse | Eclipse Oxygen: How to automatically upload php files on remote server | 1 | 1 | 1 | 48,876,177 | 0 |
1 | 0 | I have the bulk of my web application in React (front-end) and Node (server), and am trying to use Python for certain computations. My intent is to send data from my Node application to a Python web service in JSON format, do the calculations in my Python web service, and send the data back to my Node application.
Fla... | false | 48,600,583 | 0 | 0 | 0 | 0 | In terms of thoughts:
1) You can build a REST interface to your python code using Flask. Make REST calls from your nodejs.
2) You have to decide if your client will wait synchronously for the result. If it takes a relatively long time you can use a web hook as a callback for the result. | 1 | 415 | 0 | 0 | 2018-02-03T18:27:00.000 | python,node.js,rest,api,web-services | Python web service with React/Node Application | 1 | 1 | 1 | 48,600,936 | 0 |
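A minimal Flask sketch of option 1 above: a JSON endpoint the Node application can POST to. The route name, port, and the computation are placeholders; the Node side would call it with fetch/axios and parse the JSON response.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/compute", methods=["POST"])  # hypothetical endpoint name
def compute():
    payload = request.get_json(force=True)
    numbers = payload.get("numbers", [])
    # Stand-in for the real Python computation.
    return jsonify({"sum": sum(numbers), "count": len(numbers)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```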
1 | 0 | I'm trying to decide if I should use gevent or threading to implement concurrency for web scraping in python.
My program should be able to support a large (~1000) number of concurrent workers. Most of the time, the workers will be waiting for requests to come back.
Some guiding questions:
What exactly is the difference... | false | 48,608,845 | 0 | 0 | 0 | 0 | A Python thread is an OS thread controlled by the OS, which means it's a lot heavier since it needs context switches, while a green thread is lightweight and, since it lives in userspace, the OS does not create or manage it.
I think you can use gevent, Gevent = eventloop(libev) + coroutine(greenlet) + monkey patch, G... | 1 | 1,130 | 0 | 1 | 2018-02-04T14:02:00.000 | python,multithreading,concurrency,python-multithreading,gevent | Python Threading vs Gevent for High Volume Web Scraping | 1 | 1 | 2 | 57,702,150 | 0 |
1 | 0 | I’ve started working a lot with Flask SocketIO in Python with Eventlet and are looking for a solution to handle concurrent requests/threading. I’ve seen that it is possible with gevent, but how can I do it if I use eventlet? | true | 48,611,425 | 1.2 | 0 | 0 | 5 | The eventlet web server supports concurrency through greenlets, same as gevent. No need for you to do anything, concurrency is always enabled. | 0 | 2,111 | 0 | 2 | 2018-02-04T18:10:00.000 | python,socket.io,webserver,flask-socketio,eventlet | Handle concurrent requests or threading Flask SocketIO with eventlet | 1 | 1 | 2 | 48,616,158 | 0 |
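For reference, a minimal Flask-SocketIO setup running on the eventlet server described in the answer; concurrency needs no extra configuration, and the monkey patch and explicit async_mode are optional, shown only for clarity.

```python
import eventlet
eventlet.monkey_patch()  # optional: make blocking stdlib I/O cooperative

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# With eventlet installed, Flask-SocketIO picks it up automatically.
socketio = SocketIO(app, async_mode="eventlet")

@socketio.on("ping")
def handle_ping(data):
    socketio.emit("pong", data)

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```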
0 | 0 | I have created simple test cases using selenium web driver in python. I want to log the execution of the test cases at different levels. How do I do it? Thanks in advance. | false | 48,617,220 | 0 | 1 | 0 | 0 | I created library in python for logging info messages and screenshots in HTML file called selenium-logging
There is also video explanation of package on youtube (25s) called "Python HTML logging" | 0 | 1,240 | 0 | 1 | 2018-02-05T06:53:00.000 | python,selenium-webdriver | Logging in selenium python | 1 | 1 | 1 | 69,818,465 | 0 |
1 | 0 | I'm having a little trouble figuring out if I should have the API for admins and for users split. So:
Admins should login using /admin/login with a POST request, and users just /login.
Admins should access/edit/etc resources on /admin/resourceName and users just /resourceName. | true | 48,646,826 | 1.2 | 0 | 0 | 1 | You should only have one endpoint, not one for each type of user. What if you have moderators? Will you also create a /mods/login ?
What each user should and shouldn't have access to should be sorted out with permissions. | 0 | 40 | 0 | 1 | 2018-02-06T15:46:00.000 | python,rest,falconframework | Should I make an API for users and an API for admins? | 1 | 1 | 1 | 48,646,870 | 0 |
1 | 0 | I have the below error while running my code on an Amazon EC2 instance, and when trying to import the h5py package I get a permission denied error:
ImportError: load_weights requires h5py | false | 48,676,037 | 0 | 0 | 0 | 0 | Just solve it using sudo pip install h5py. | 0 | 539 | 0 | 0 | 2018-02-08T01:26:00.000 | python-3.x,amazon-web-services,h5py | Import error requires h5py | 1 | 1 | 1 | 48,676,038 | 0
0 | 0 | I need to extract specific data from a Grafana dashboard. Grafana is connected to Graphite on the backend. It seems there is no API to make calls to Grafana directly.
Any help?
Ex: I need to extract AVG CPU value from graph of so and so server. | false | 48,683,976 | 0.197375 | 0 | 0 | 1 | The only way I found in grafana 7.1 was to:
Open the dashboard and then inspect the panel
Open the query tab and click on refresh
Use the url and parameters on your own query to the api
note: First you need to create an API key in the UI with the proper role and add the bearer to the request headers | 0 | 2,210 | 0 | 4 | 2018-02-08T11:04:00.000 | python-3.x,graphite,grafana-api | How can we extract data from grafana dashboard? | 1 | 1 | 1 | 62,026,844 | 0 |
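A sketch of the last step in the answer above: replaying the query captured from the panel's query inspector, with the API key sent as a bearer token. The host, proxy path, and parameters are placeholders — they come from whatever the inspector showed for your Graphite data source.

```python
import requests

GRAFANA_URL = "https://grafana.example.com"  # placeholder host
API_KEY = "eyJrIjoi..."                      # API key created in the Grafana UI

# Path and params are whatever the panel's query inspector showed.
resp = requests.get(
    f"{GRAFANA_URL}/api/datasources/proxy/1/render",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"target": "servers.web01.cpu.avg", "from": "-1h", "format": "json"},
)
resp.raise_for_status()
print(resp.json())
```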
1 | 0 | My server in Python (Tornado) sends CSV content on a GET request.
I want to specify the content type of the response as "text/csv", but when I do this the file is downloaded when I send the GET request from my browser.
How can I specify the header "Content-Type: text/csv" without making it a downloadable file b...
If you need to tell ot... | 0 | 50 | 0 | 0 | 2018-02-08T12:04:00.000 | python,http,request,tornado | GET response - Do NOT send a downlaodable file | 1 | 1 | 1 | 48,722,882 | 0 |
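A minimal Tornado handler illustrating the answer: serve the CSV text with a text/plain content type so the browser renders it inline instead of downloading it (switch back to text/csv when a download is actually wanted). The data and route are placeholders.

```python
import tornado.ioloop
import tornado.web

CSV_DATA = "id,name\n1,alice\n2,bob\n"  # stand-in for the real CSV content

class CsvHandler(tornado.web.RequestHandler):
    def get(self):
        # text/plain displays in the browser; text/csv triggers a download.
        self.set_header("Content-Type", "text/plain; charset=utf-8")
        self.write(CSV_DATA)

def make_app():
    return tornado.web.Application([(r"/data", CsvHandler)])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```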
0 | 0 | I have packages stored in s3 bucket. I need to read metadata file of each package and pass the metadata to program. I used boto3.resource('s3') to read these files in python. The code took few minutes to run. While if I use aws cli sync, it downloads these metafiles much faster than boto. My guess was that if I do not ... | false | 48,692,483 | 0 | 1 | 0 | 0 | It's true that the AWS CLI uses boto, but the cli is not a thin wrapper, as you might expect. When it comes to copying a tree of S3 data (which includes the multipart chunks behind a single large file), it is quite a lot of logic to make a wrapper that is as thorough and fast, and that does things like seamlessly pick... | 0 | 4,228 | 0 | 3 | 2018-02-08T18:32:00.000 | python,amazon-s3,boto,boto3 | Is aws CLI faster than using boto3? | 1 | 1 | 3 | 63,806,918 | 0 |
1 | 0 | I have some 1000 html pages. I need to update the names which is present at the footer of every html page. What is the best possible efficient way of updating these html pages instead of editing each name in those html pages one by one.
Edit: Even if we use some sort of scripts, we have to make changes to every html fi... | false | 48,697,322 | 0 | 0 | 0 | 0 | You can use domdocument and domxpath to parse the html file(you can use php file_get_contents to read the file )
it looks like i can't post links | 0 | 1,154 | 0 | 0 | 2018-02-09T01:15:00.000 | javascript,php,python,html,css | How to update static content in multiple HTML pages | 1 | 1 | 3 | 60,838,059 | 0 |
0 | 0 | I'm learning about basic back-end and server mechanics and how to connect it with the front end of an app. More specifically, I want to create a React Native app and connect it to a database using Python(simply because Python is easy to write and fast). From my research I've determined I'll need to make an API that com... | false | 48,698,110 | 0.197375 | 0 | 0 | 2 | You have to create a flask proxy, generate JSON endpoints then use fetch or axios to display this data in your react native app. You also have to be more specific next time. | 0 | 6,331 | 0 | 2 | 2018-02-09T03:04:00.000 | python,rest,http,react-native,server | How to create a Python API and use it with React Native? | 1 | 1 | 2 | 48,718,794 | 0 |
1 | 0 | I have two instances. One is on the Public Subnet & the other is on the Private subnet of AWS. In the private system, I am performing some computation. And the public system is acting as the API endpoint.
My total flow idea is like this: When some request comes to the public server, the parameters should be forwarded t... | false | 48,702,061 | 0 | 0 | 0 | 0 | This is a commonly used pattern when separating Web Servers and App Servers in traditional Web Application setup, keeping the Web Servers in public subnets (Or keeping internet accessible) and the business rules kept in App Servers in the private network.
However, it also depends on the complexity of the system to just... | 0 | 59 | 0 | 0 | 2018-02-09T08:58:00.000 | python,amazon-web-services,http,server | Best way for communicating between two servers | 1 | 1 | 2 | 48,702,358 | 0 |
0 | 0 | I'm working with splinter and Python and I'm trying to set up some automation and log into Twitter.com
Having trouble though...
For example the password field's "name=session[password]" on Twitter.com/login
and the username is similar. I'm not exactly sure of the syntax or what this means, something with a cookie...
But... | false | 48,709,350 | 0 | 1 | 0 | 0 | What's the purpose of doing this rather than using the official API?
Scripted logins to Twitter.com are against the Terms of Service, and Twitter employs multiple techniques to detect and disallow them. Accounts showing signs of automated login of this kind are liable to suspension or requests for security re-verificat... | 0 | 59 | 0 | 0 | 2018-02-09T15:43:00.000 | python,python-3.x,twitter,login,splinter | How to log into Twitter with Splinter Python | 1 | 1 | 1 | 48,716,354 | 0 |
0 | 0 | I am trying to use websocket.WebSocketApp, however it's coming up with the error: module 'websocket' has no attribute 'WebSocketApp'
I had a look at previous solutions for this, tried to uninstall websocket and install websocket-client, and it still comes up with the same error.
My File's name is MyWebSocket, so I don't... | false | 48,730,108 | 0.099668 | 0 | 0 | 1 | Just installing websocket-client==1.2.0 is ok.
I encountered this problem when I was using websocket-client==1.2.3 | 0 | 7,871 | 0 | 1 | 2018-02-11T09:32:00.000 | python,websocket,pip | AttributeError: module 'websocket' has no attribute 'WebSocketApp' pip | 1 | 1 | 2 | 70,608,299 | 0 |
0 | 0 | Can any of you help me with an automation task which involves connecting through RDP and automating a certain task in a particular application on that server?
I have found scripts for the RDP connection and for Windows GUI automation separately.
But in the integration, I have become a bit confused.
It wil... | true | 48,764,814 | 1.2 | 0 | 0 | 2 | It is not possible to automate a RDP window using pywinauto as RDP window itself is an image of a desktop. Printing control identifiers of the RDP window gives the UI of the screen.
Solution is to install python+pywinauto in the remote machine. | 0 | 1,322 | 1 | 0 | 2018-02-13T10:40:00.000 | python-3.x,user-interface,rdp,pywinauto | GUI Automation in RDP | 1 | 1 | 1 | 49,934,460 | 0 |
0 | 0 | I have a python project with Selenium that I was working on a year ago. When I came back to work on it and tried to run it I get the error ImportError: No module named selenium. I then ran pip install selenium inside the project which gave me Requirement already satisfied: selenium in some/local/path. How can I make my... | false | 48,766,723 | 0 | 0 | 0 | 0 | Is it possible that you're using e.g. Python 3 for your project, and selenium is installed for e.g. Python 2?
If that is the case, try pip3 install selenium | 0 | 141 | 0 | 0 | 2018-02-13T12:20:00.000 | python,selenium,import | Import error "No module named selenium" when returning to Python project | 1 | 1 | 1 | 48,767,767 | 0 |
0 | 0 | I got this error in Python 3.6: ModuleNotFoundError: No module named 'oauth2client.client'. I tried pip3.6 install --upgrade google-api-python-client, but I don't know how to fix it.
Please tell me how to fix it.
Thanks | false | 48,780,634 | 0.761594 | 0 | 0 | 5 | Use below code, this worked for me:
pip3 install --upgrade oauth2client | 0 | 7,084 | 0 | 2 | 2018-02-14T06:04:00.000 | python-3.x | ModuleNotFoundError: No module named 'oauth2client.client' | 1 | 1 | 1 | 52,187,177 | 0 |
1 | 0 | I am developing a desktop application that must send a specified url to a Flask application hosted online, and subsequently receive data from the same Flask app. 2 applications communicating back & forth. I am able to make GET and POST requests to this Flask app, but I am unaware of how to construct specific URL's whic... | false | 48,795,392 | 0 | 0 | 0 | 0 | If your HTTP client is written in python the simplest solution would be to use a higher level HTTP library like requests or urllib2. If you want to get the path mappings against your Flask app views you could print them by introspecting the app object and export them to json or some other format and use them in your cl... | 0 | 113 | 0 | 0 | 2018-02-14T20:09:00.000 | python,sockets,flask | Python - Using socket to construct URL for external Flask server's view function | 1 | 1 | 1 | 48,795,531 | 0 |
0 | 0 | I am trying to use the Select function in Selenium for Python 3 to help with navigating through the drop down boxes. However, when I try to import org.openqa.selenium.support.ui.Select I get an error message:
"No module named 'org'"
Would appreciate any help on this. I saw there was a similar question posted a few week... | true | 48,812,910 | 1.2 | 0 | 0 | 4 | The path 'org.openqa.selenium.support.ui.Select' is a Java descriptor. In Python, make sure you have the Python Selenium module installed with pip install selenium, and then import it with import selenium.
For the Select function specifically, you can import that with the following
from selenium.webdriver.support.ui im... | 0 | 5,709 | 0 | 1 | 2018-02-15T17:20:00.000 | python,selenium | Selenium / Python - No module named 'org' | 1 | 1 | 1 | 48,813,013 | 0 |
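To round out the answer above, a short usage sketch of the Python Select helper; the page URL, element id, and option text are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.support.ui import Select

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # placeholder page

# Wrap the <select> element and pick an option by its visible text.
dropdown = Select(driver.find_element_by_id("country"))  # hypothetical id
dropdown.select_by_visible_text("Canada")
# Alternatives: select_by_value("CA") or select_by_index(2)

driver.quit()
```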
1 | 0 | I made a server using Python on my laptop, and I made a client using Java on the same laptop. They connected and communicated.
But when I made a client using Java on another laptop, the client didn't find the server.
What is wrong, and what could I do? | false | 48,852,421 | 0 | 0 | 0 | 0 | On the laptop running the server:
The client can access using localhost:<port> or 0.0.0.0:<port>
Connecting from another laptop (same network):
You have to connect to: <pc-server-local-ip>:<port>
To get <pc-server-local-ip, using the laptop running your server:
- Windows : type ipconfig in console, value next to IPV4... | 0 | 27 | 0 | 0 | 2018-02-18T13:52:00.000 | java,python,server,client | python server and java client(another PC) Error | 1 | 1 | 1 | 48,852,566 | 0 |
0 | 0 | In order to test our server we designed a test that sends a lot of requests with JSON payload and compares the response it gets back.
I'm currently trying to find a way to optimize the process by using multi threads to do so. I didn't find any solution for the problem that I'm facing though.
I have a url address and ... | true | 48,880,508 | 1.2 | 0 | 0 | 0 | Well, you have couple of options:
Use multiprocessing.pool.ThreadPool (Python 2.7) where you create pool of threads and then use them for dispatching requests. map_async may be of interest here if you want to make async requests,
Use concurrent.futures.ThreadPoolExecutor (Python 3) with similar way of working with Thr... | 1 | 177 | 0 | 0 | 2018-02-20T08:13:00.000 | python,multithreading,python-2.7,asynchronous | using threading for multiple requests | 1 | 1 | 2 | 48,881,300 | 0 |
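A compact sketch of the second option above (concurrent.futures on Python 3): fire the JSON requests from a thread pool and compare each response as it completes. The URL and payloads are placeholders.

```python
import concurrent.futures
import requests

URL = "https://example.com/api"               # placeholder endpoint
payloads = [{"case": i} for i in range(100)]  # stand-in test payloads

def send(payload):
    resp = requests.post(URL, json=payload, timeout=10)
    return payload, resp.json()

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(send, p) for p in payloads]
    for fut in concurrent.futures.as_completed(futures):
        payload, body = fut.result()
        # Compare the response against the expected value here.
        print(payload, body)
```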
0 | 0 | We are currently trying to process user input and checking if user has entered a food item using elastic search.
With elastic search we are able to get results for wide range of terms: Garlic , Garlic Extract etc...
How should we handle use cases E.g. Blueberry Dish-washing soap Or Apple based liquid soap . How do we ... | true | 48,891,679 | 1.2 | 0 | 0 | 2 | Your objective requires that you perform part of speech tagging on your query, and then use those tags to identify nouns.
You would then need to compare the extracted nouns to a pre-curated list of food strings and, after identifying those that are not food, remove the clauses of which those nouns are the subject and /... | 0 | 35 | 0 | 1 | 2018-02-20T18:11:00.000 | python,elasticsearch,nlp | How to filter out elastic searches for invalid inputs | 1 | 1 | 1 | 48,898,306 | 0 |
1 | 0 | What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script).
User clicks a link > server side runs > html page is returned (blank
page for 10 - 15 seconds).
User clicks a link > html page is immediately returned (with progress
bar) > AJAX post request to th... | false | 48,896,407 | 0.197375 | 0 | 0 | 2 | Best Practice would be for the the script to not take 10-15 seconds.
What is your script doing? Is it generating something that you can pre-compute and cache or save in Google Cloud Storage?
If you're daisy-chaining datastore queries together, is there something you can do to make them happen async in tandem?
If it re... | 0 | 73 | 0 | 3 | 2018-02-21T00:27:00.000 | javascript,python,html,ajax,google-app-engine | Best practice for loading webpage with long server side script | 1 | 2 | 2 | 48,897,992 | 0 |
1 | 0 | What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script).
User clicks a link > server side runs > html page is returned (blank
page for 10 - 15 seconds).
User clicks a link > html page is immediately returned (with progress
bar) > AJAX post request to th... | true | 48,896,407 | 1.2 | 0 | 0 | 1 | The way we're doing it is using the Ajax approach (the second one) which is what everyone else does.
You can use Task Queues to run your scripts asynchronously and return the result to front end using FCM (Firebase Cloud Messaging).
You should also try to break the script into multiple task queues to make it run faster... | 0 | 73 | 0 | 3 | 2018-02-21T00:27:00.000 | javascript,python,html,ajax,google-app-engine | Best practice for loading webpage with long server side script | 1 | 2 | 2 | 48,899,196 | 0 |
1 | 0 | Right now, I am generating the Allure Report through the terminal by running the command: allure serve {folder that contains the json files}, but with this way the HTML report will only be available to my local because
The json files that generated the report are in my computer
I ran the command through the terminal ... | true | 48,914,528 | 1.2 | 1 | 0 | 5 | It's doesn't work because allure report as you seen is not a simple Webpage, you could not save it and send as file to you team. It's a local Jetty server instance, serves generated report and then you can open it in the browser.
Here for your needs some solutions:
One server(your local PC, remote or some CI environme... | 0 | 10,030 | 0 | 7 | 2018-02-21T20:04:00.000 | python,automation,frameworks,allure | Is there a way to export Allure Report to a single html file? To share with the team | 1 | 2 | 6 | 48,926,889 | 0 |
1 | 0 | Right now, I am generating the Allure Report through the terminal by running the command: allure serve {folder that contains the json files}, but with this way the HTML report will only be available to my local because
The json files that generated the report are in my computer
I ran the command through the terminal ... | false | 48,914,528 | 0 | 1 | 0 | 0 | Allure report generates html in temp folder after execution and you can upload it to one of the server like netlify and it will generate an url to share. | 0 | 10,030 | 0 | 7 | 2018-02-21T20:04:00.000 | python,automation,frameworks,allure | Is there a way to export Allure Report to a single html file? To share with the team | 1 | 2 | 6 | 63,722,118 | 0 |
0 | 0 | I am basically running my personal project,but i'm stuck in some point.I am trying to make a login request to hulu.com using Python's request module but the problem is hulu needs a cookie and a CSRF token.When I inspected the request with HTTP Debugger it shows me the action URL and some request headers.But the cookie ... | true | 48,940,807 | 1.2 | 0 | 0 | 0 | First create a session then use GET and use session.cookies.get_dict() it will return a dict and it should have appropriate values you need | 0 | 882 | 0 | 0 | 2018-02-23T03:48:00.000 | python,post,cookies,request | How to get cookies before making request in Python | 1 | 1 | 1 | 48,940,818 | 0 |
0 | 0 | I am creating a REST API. Basic idea is to send data to a server and the server gives me some other corresponding data in return. I want to implement this with SSL. I need to have an encrypted connection between client and server. Which is the best REST framework in python to achieve this? | true | 48,942,393 | 1.2 | 0 | 0 | 3 | You can choose any framework to develop your API, if you want SSL on your API endpoints you need to setup SSL with the Web server that is hosting your application
You can obtain a free SSL cert using Let's encrypt. You will however need a domain in order to be able to get a valid SSL certificate.
SSL connection between... | 0 | 6,205 | 0 | 1 | 2018-02-23T06:32:00.000 | python,rest,django-rest-framework,flask-restful,falcon | REST API in Python over SSL | 1 | 1 | 2 | 48,942,911 | 0 |
0 | 0 | Basically, I want to use python to query my IB order history and do some analyze afterwards. But I could not find any existing API for me to query these data, does anyone have experience to do this? | false | 48,942,917 | 1 | 0 | 0 | 6 | You have to use flex queries for that purpose. It has full transaction history including trades, open positions, net asset value history and exchange rates. | 0 | 4,976 | 0 | 5 | 2018-02-23T07:13:00.000 | python,api,interactive-brokers | Interactive brokers: How to retrieve transaction history records? | 1 | 2 | 2 | 51,470,089 | 0 |
0 | 0 | Basically, I want to use Python to query my IB order history and do some analysis afterwards. But I could not find any existing API for me to query this data; does anyone have experience doing this? | false | 48,942,917 | 1 | 0 | 0 | 6 | You have to use flex queries for that purpose. It has full transaction history including trades, open positions, net asset value history and exchange rates. | 0 | 4,976 | 0 | 5 | 2018-02-23T07:13:00.000 | python,api,interactive-brokers | Interactive brokers: How to retrieve transaction history records? | 1 | 2 | 2 | 51,470,089 | 0
0 | 0 | Hi, I am new to gRPC and I want to send a message from server to client first. I understand how to implement the client sending a message and getting a response from the server, but I want to try having the server initiate a message to connected clients. How could I do that? | false | 48,969,107 | 0 | 0 | 0 | 0 | Short answer: you can't
gRPC is a request-response framework based on HTTP2. Just as you cannot make a website that initiates a connection to a browser, you cannot make a gRPC service initiating a connection to the client. How would the service even know who to talk to?
A solution could be to open a gRPC server on the ... | 0 | 325 | 0 | 0 | 2018-02-25T00:38:00.000 | python,grpc | How to let server send the message first in GRPC using python | 1 | 1 | 1 | 49,018,750 | 0 |
1 | 0 | I was working with Pyrebase( python library for firebase) and was trying .stream() method but when I saw my firebase dashboard it showed 100 connection limit reached. Is there any way to remove those concurrent connection? | false | 48,973,464 | 0 | 0 | 0 | 0 | There is a limit of 100 concurrent connections to the database for Firebase projects that are on the free Spark plan. To raise the limit, upgrade your project to a paid plan. | 0 | 399 | 0 | 0 | 2018-02-25T12:34:00.000 | python,rest,firebase,firebase-realtime-database | Firebase connection limit reached | 1 | 1 | 1 | 48,975,506 | 0 |
0 | 0 | I am using a Python gRPC client and make requests to a service that
responds with a stream. Last I checked, the documentation says the iterator.next()
is sync and blocking. Have things changed now ? If not any ideas on overcoming this shortcoming ?
Thanks
Arvind | true | 48,979,972 | 1.2 | 0 | 0 | 1 | Things have not changed; as of 2018-03 the response iterator is still blocking.
We're currently scoping out remedies that may be ready later this year, but for the time being, calling next(response_iterator) is only way to draw RPC responses. | 0 | 1,426 | 0 | 1 | 2018-02-26T00:34:00.000 | python,grpc | Is grpc server response streaming still blocking? | 1 | 1 | 2 | 49,501,641 | 0 |
0 | 0 | Does someone have a solution for detecting and mitigating TCP SYN flood attacks in an SDN environment based on the POX controller? | false | 49,003,874 | 0 | 0 | 0 | 0 | As I understand it, you may need to prepare a third-party program to collect flow information (e.g. sFlow) and write a program for communicating with the SDN controller. The SDN controller covers all traffic on the switches; it doesn't handle events above L4 in the general case. | 0 | 613 | 1 | 0 | 2018-02-27T08:05:00.000 | python,sdn,pox | Python Code to detect and mitigate TCP SYN Flood attacks in SDN and POX controller | 1 | 1 | 1 | 49,146,800 | 0
0 | 0 | I want to use the Lyft Driver API like the Mystro Android app does; however, I've searched everywhere and all I could find is the Lyft API.
To elaborate more on what I'm trying to achieve, I want api that will allow me to intergrate with the lyft driver app and not the lyft rider app, I want to be able to for example view nearby rid... | false | 49,011,180 | 0.197375 | 0 | 0 | 1 | The Mystro app does not have any affiliation with either Uber or Lyft nor do they use their APIs to interact with a driver (as neither Uber or Lyft have a publicly accessible driver API like this). They use an Android Accessibility "feature" that let's the phone look into and interact with other apps you have running.... | 0 | 249 | 0 | 0 | 2018-02-27T14:36:00.000 | android,python,ios,node.js,lyft-api | How do I use Lyft driver API like Mystro android app? | 1 | 1 | 1 | 52,992,307 | 0 |
0 | 0 | Both an existing raspberry pi 3 assistant-sdk setup and a freshly created one are producing identical errors at all times idle or otherwise. The lines below are repeating over and do not seem to be affected by the state of the assistant. Replicates across multiple developer accounts, devices and projects. Present wi... | false | 49,041,313 | 0.197375 | 0 | 0 | 1 | This fixed it for me: pip3 install google-assistant-library==0.1.0 | 0 | 126 | 0 | 0 | 2018-03-01T01:33:00.000 | python,raspberry-pi,raspberry-pi3,google-assistant-sdk | Assistant SDK on raspberry pi 3 throwing repeated location header errors | 1 | 1 | 1 | 49,223,023 | 0 |
1 | 0 | At the moment i am working on an odoo project and i have a kanban view. My question is how do i put a kanban element to the bottom via xml or python. Is there an index for the elements or something like that? | false | 49,046,224 | 0 | 0 | 0 | 0 | I solved it myself. I just added _order = 'finished asc' to the class. finished is a record of type Boolean and tells me if the Task is finished or not. | 0 | 63 | 0 | 0 | 2018-03-01T09:10:00.000 | python-2.7,odoo-8,odoo | Is there a way to put a kanban element to the bottom in odoo | 1 | 1 | 1 | 49,046,718 | 0 |
0 | 0 | I'm just starting with Selenium in python, and I have set up an ActionChains object and perform()ed a context click. How do I tell whether a context menu of any sort has actually popped up? For example, can I use the return value in some way?
The reason is that I want to disable the context menu in some cases, and want... | true | 49,058,060 | 1.2 | 0 | 0 | 2 | Selenium cannot see or interact with native context menus.
I recommend testing this in a JavaScript unit test, where you can assert that event.preventDefault() was called. It's arguably too simple/minor of a behavior to justify the expense of a Selenium test anyway. | 0 | 155 | 0 | 0 | 2018-03-01T20:20:00.000 | python,selenium,contextmenu | Selenium: how to check if context menu has appeared | 1 | 1 | 1 | 49,062,808 | 0 |
1 | 0 | I would like to use the ActionChains function of Selenium.
Below is my code, but it does not work when the right-click menu opens.
The ARROW_DOWN and ENTER are applied in the main window, not in the right-click menu.
How can the ARROW_DOWN and ENTER code be applied in the right-click menu?
Browser = webdriver.Chrome()
actionC... | false | 49,063,955 | 0.099668 | 0 | 0 | 1 | Selenium cannot see or interact with native context menus. | 0 | 913 | 0 | 0 | 2018-03-02T06:38:00.000 | python-3.x | (Python Selenium with Chrome) How to click in the right click menu list | 1 | 1 | 2 | 49,070,495 | 0 |
0 | 0 | As the title says, I'm looking for a way to send an AT command to a remote XBee and read the response.
My code is in Python and I'm using the digi-xbee library.
another question: my goal of using AT command is to get the node ID of that remote xbee device when this last one send me a message, i don't want to do a full scan... | false | 49,088,925 | 0 | 0 | 0 | 0 | There is a way to send a command to a remoted xbee: First, connect to the local XBee and then send a command to the local Xbee so the local Xbee can send a remote_command to the remoted XBee.
Here are the details:
Create a bytearray of the command. For e.g:
My command is: 7E 00 10 17 01 00 13 A2 00 41 47 XX XX FF FE... | 0 | 777 | 1 | 0 | 2018-03-03T20:31:00.000 | python,xbee | how to send remote AT command to xbee device using python digi-xbee library | 1 | 2 | 2 | 52,888,170 | 0 |
0 | 0 | As the title says, I'm looking for a way to send an AT command to a remote XBee and read the response.
My code is in Python and I'm using the digi-xbee library.
another question: my goal of using AT command is to get the node ID of that remote xbee device when this last one send me a message, i don't want to do a full scan... | false | 49,088,925 | 0 | 0 | 0 | 0 | when you recive a message you get an xbee_message object, first you must define a data receive callback function and add it to device . In that message you call remote_device_get_64bit_addr(). | 0 | 777 | 1 | 0 | 2018-03-03T20:31:00.000 | python,xbee | how to send remote AT command to xbee device using python digi-xbee library | 1 | 2 | 2 | 49,214,426 | 0 |
0 | 0 | We are trying to convert a gRPC protobuf message to finally be a json format object for processing in python.
The data sent across from server in serialized format is around 35MB and there is around 15K records. But when we convert protobuf message into string (using MessageToString) it is around 135 MB and when we co... | false | 49,091,459 | 0 | 1 | 0 | 0 | Fixed the issue by only picking the fields that is needed when deserializing the data, rather than deserialize all the data returned from the server. | 0 | 1,175 | 0 | 0 | 2018-03-04T02:49:00.000 | python,json,protocol-buffers,grpc | Converting gRPC protobuf message to json runs for long | 1 | 1 | 1 | 50,177,889 | 0 |
0 | 0 | I want to share my local WebSocket on the internet, but ngrok only supports HTTP, and my ws.py address is ws://localhost:8000/.
It works well on localhost, but I don't know how to use this on the internet. | false | 49,129,451 | 0.197375 | 0 | 0 | 1 | You can use ngrok http 8000 to access it. It will work. Although ws is altogether a different protocol than http, ngrok handles it internally. | 0 | 2,555 | 0 | 2 | 2018-03-06T11:12:00.000 | python,websocket,localhost,ngrok,serve | how to use ws(websocket) via ngrok | 1 | 1 | 1 | 52,701,751 | 0
1 | 0 | I want to put a jpg in a dropzone from another window.
Can I do that?
In my test I open a new window (my HTML with the jpg) and I want to drag and drop it to the dropzone on my main window.
I have error:
Message: stale element reference: element is not attached to the page document.
Maybe there is another solution for placing this fil... | false | 49,148,607 | 0 | 0 | 0 | 0 | I solved the problem by creating a script in AutoIT. | 0 | 87 | 0 | 0 | 2018-03-07T09:40:00.000 | python,selenium,drag-and-drop,webdriver | Can i use drag and drop from other window? Python Selenium | 1 | 1 | 1 | 49,151,725 | 0 |
1 | 0 | I have a query as to whether what I want to achieve is doable, and if so, perhaps someone could give me some advice on how to achieve this.
So I have set up a health check on Route 53 for my server, and I have arranged so that if the health check fails, the user will be redirected to a static website I have set up at a... | true | 49,198,057 | 1.2 | 0 | 0 | 2 | Make up a filename. Let's say healthy.txt.
Put that file on your web server, in the HTML root. It doesn't really matter what's in the file.
Verify that if you go to your site and try to download it using a web browser, it works.
Configure the Route 53 health check as HTTP and set the Path for the check to use /healt... | 0 | 42 | 0 | 0 | 2018-03-09T16:24:00.000 | python,amazon-web-services,amazon-route53,health-monitoring | Intentionally Fail Health Check using Route 53 AWS | 1 | 1 | 1 | 49,201,020 | 0 |
0 | 0 | In the docs for heapq, its written that
heapq.heappushpop(heap, item)
Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop().
Why is it more efficient?
Also, is it considerably more efficient? | true | 49,228,574 | 1.2 | 0 | 0 | 4 | heappop pops out the first element, then moves the last element to fill the first place, then does a sinking operation, moving the element down through consecutive exchanges, thus restoring the heap.
It is O(log n).
Then heappush places the element in the last place and bubbles it up,
like heappop but revers... | 0 | 963 | 0 | 2 | 2018-03-12T05:21:00.000 | python-3.x,heap | How is heapq.heappushpop more efficient than heappop and heappush in python | 1 | 2 | 2 | 49,232,244 | 0 |
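A tiny example contrasting the two call sequences: heappushpop gives the same result as push-then-pop, but it is a single combined sift and can skip the push entirely when the new item is not larger than the heap's minimum.

```python
import heapq

heap = [3, 5, 8]
heapq.heapify(heap)

# One combined operation ...
smallest = heapq.heappushpop(heap, 7)  # returns 3; heap is now [5, 7, 8]

# ... versus two separate operations on an equivalent heap.
other = [3, 5, 8]
heapq.heapify(other)
heapq.heappush(other, 7)
same_smallest = heapq.heappop(other)   # also 3, but with an extra sift

assert smallest == same_smallest == 3
```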
0 | 0 | In the docs for heapq, its written that
heapq.heappushpop(heap, item)
Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop().
Why is it more efficient?
Also, is it considerably more efficient? | false | 49,228,574 | 0.197375 | 0 | 0 | 2 | heappushpop pushes an element and then pops the smallest element. If the element you're pushing is smaller than (or equal to) the heap's minimum, then there's no need to do any operations, because we know that the element we're trying to push would be popped right back if we did it in two operations.
This is effi... | 0 | 963 | 0 | 2 | 2018-03-12T05:21:00.000 | python-3.x,heap | How is heapq.heappushpop more efficient than heappop and heappush in python | 1 | 2 | 2 | 57,665,038 | 0 |
1 | 0 | I am required to send the POS receipt to the customer while validating a POS order; the challenge is that the ticket is defined in point_of_sale/xml/pos.xml.
The receipt name is <t t-name="PosTicket">.
How can I send this via email to the customer? | false | 49,235,894 | 0 | 1 | 0 | 0 | You can create a wizard at the time of validation of the POS order which pops up after validating the order. In that popup, enter the mail id of the customer, and on submit the receipt is forwarded directly to that customer. | 0 | 205 | 0 | 0 | 2018-03-12T12:59:00.000 | python-3.x,odoo,point-of-sale,odoo-11 | Send POS Receipt Email to Customer While Validating POS Order | 1 | 1 | 1 | 52,870,087 | 0
0 | 0 | I am playing around with scapy (a module for Python). I want to build packets and send them across my local network from one host to another. When I build my packet like this, I do not receive anything on my destination host:
packet = Ether() / IP(dst='192.168.0.6') / TCP(dport=8000) => sendp(packet).
However, when I bu... | true | 49,243,269 | 1.2 | 0 | 0 | 1 | send() uses Scapy's routing table (which is copied from the host's routing table when Scapy is started), while sendp() uses the provided interface, or conf.iface when no value is specified.
So you should either set conf.iface = [iface] ([iface] being the interface you want to use), or specify sendp([...], iface=[iface]... | 0 | 404 | 1 | 1 | 2018-03-12T19:41:00.000 | python,wireshark,scapy | Can't send ethernet packages across my LAN | 1 | 1 | 1 | 49,250,191 | 0 |
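A short sketch of the two fixes from the answer; the interface name is a placeholder for whatever your LAN-facing interface is called, and sending raw packets typically requires root privileges.

```python
from scapy.all import IP, TCP, conf, send

# Option 1: make Scapy use the LAN-facing interface for everything.
conf.iface = "eth0"  # placeholder interface name
send(IP(dst="192.168.0.6") / TCP(dport=8000))

# Option 2: pass the interface explicitly for a single send.
send(IP(dst="192.168.0.6") / TCP(dport=8000), iface="eth0")
```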
0 | 0 | I am trying to click on an element but getting the error:
Element is not clickable at point (x,y.5)
because another element obscures it.
I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possib... | false | 49,252,880 | -0.099668 | 0 | 0 | -2 | I found that sometimes the webpage is not fully loaded and the answer is as simple as adding a time.sleep(2) | 0 | 15,329 | 0 | 11 | 2018-03-13T09:46:00.000 | python,selenium,selenium-webdriver | Element is not clickable at point (x,y.5) because another element obscures it | 1 | 2 | 4 | 68,574,224 | 0 |
0 | 0 | I am trying to click on an element but getting the error:
Element is not clickable at point (x,y.5)
because another element obscures it.
I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possib... | true | 49,252,880 | 1.2 | 0 | 0 | 11 | There is possibly one thing you can do. It is very crude though, I'll admit it straight away.
You can simulate a click on the element directly preceding the element in need, and then simulate a key press [TAB] and [ENTER].
Actually, I've been seeing that error recently. I was using the usual .click() command provided ... | 0 | 15,329 | 0 | 11 | 2018-03-13T09:46:00.000 | python,selenium,selenium-webdriver | Element is not clickable at point (x,y.5) because another element obscures it | 1 | 2 | 4 | 49,261,182 | 0 |
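A sketch of the tab-and-enter workaround described in the answer, using ActionChains; the page URL and the locator for the neighbouring element are hypothetical, and scrolling the real target into view first is often enough to avoid the overlap altogether.

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder page

# Click the element just before the obscured one, then TAB onto the
# target and "press" it with ENTER instead of a real mouse click.
neighbour = driver.find_element_by_id("previous-field")  # hypothetical id
ActionChains(driver).click(neighbour).send_keys(Keys.TAB).send_keys(Keys.ENTER).perform()
```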
0 | 0 | I develop HTTP GET Webservices (REST) in a distributed microservices architecture.
For performance reasons, I need caching on the clients of the webservices.
Is there an urllib-like library that uses HTTP cache headers of the webservices to cache?
Note: requests-cache does not seem to read http headers | false | 49,273,441 | 0 | 0 | 0 | 0 | Why we need to cache the HTTP headers?
Normally, only GET responses are valuable to be cached on the client. | 0 | 13 | 0 | 0 | 2018-03-14T09:02:00.000 | python-3.x,rest,web-services,urllib,microservices | urllib-like library that caches accordingly to HTTP headers in python? | 1 | 1 | 1 | 49,355,304 | 0 |
0 | 0 | I have a remote cron job that scrapes data using selenium every 30 minutes. Roughly 1 in 10 times the selenium script fails. When the script fails, I get an error output instead (various selenium error messages). Does this cause the cron job to stop? Shouldn't crontab try to run the script again in 30 minutes?
After a ... | false | 49,283,567 | 0 | 1 | 0 | 0 | ANSWER: The website I was scraping was sophisticated enough to find out I was using selenium because cron was running the job every 30 minutes on the dot. So they flagged my VM's IP address after the 4-5th attempt.
My solution was simple: add randomness to the interval with which I scraped the website using random.un... | 0 | 306 | 0 | 0 | 2018-03-14T16:56:00.000 | python-3.x,selenium,cron | Does cron job persist/run again if the python script fails? | 1 | 2 | 2 | 49,321,214 | 0
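The jitter fix amounts to a couple of lines at the top of the scraping script — sleep for a random number of seconds so the job no longer fires at the exact same minute every run (the 10-minute upper bound here is arbitrary).

```python
import random
import time

# Spread the real start time over a 0-10 minute window after cron fires.
time.sleep(random.uniform(0, 600))

# ... the selenium scraping code runs here ...
```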
0 | 0 | I have a remote cron job that scrapes data using selenium every 30 minutes. Roughly 1 in 10 times the selenium script fails. When the script fails, I get an error output instead (various selenium error messages). Does this cause the cron job to stop? Shouldn't crontab try to run the script again in 30 minutes?
After a ... | false | 49,283,567 | 0 | 1 | 0 | 0 | Who is sending the error output? If it's the cron daemon, then your job should be dead; if the selenium process itself is sending the mail, then it may still be running, and stuck. | 0 | 306 | 0 | 0 | 2018-03-14T16:56:00.000 | python-3.x,selenium,cron | Does cron job persist/run again if the python script fails? | 1 | 2 | 2 | 49,287,487 | 0 |
1 | 0 | Let's say I have used, with Selenium,
chrome_options.add_argument("--headless")
but now I want the browser to open. Is this possible? Thanks. | false | 49,290,655 | 0 | 0 | 0 | 0 | No, it isn’t possible.
The --headless option is a command-line flag used to instantiate the browser, meaning it is being told to execute headlessly for the entirety of its existence. | 0 | 51 | 0 | 0 | 2018-03-15T02:31:00.000 | python,selenium,selenium-chromedriver | How to open headless browser in selenium? | 1 | 1 | 1 | 49,301,624 | 0 |
0 | 0 | During the installation of exchangelib the installation tries to connect to the internet to get dependencies.
On this computer it is not possible to open the firewalls to provide the access - it is a very restricted system.
Is there a way to do an offline installation of exchangelib?
Best Regards
Klaus Heubisch | false | 49,293,207 | 0 | 0 | 0 | 0 | You have a couple of different possibilities. I think the most simple one is to create a virtualenv on a system that does have Internet access and install exchangelib and its dependencies there. You can then copy that virtualenv to the system with no Internet access.
Virtualenvs contain absolute paths, so you would nee... | 0 | 224 | 0 | 2 | 2018-03-15T06:54:00.000 | python-3.x,exchange-server,exchangelib | I cannot install exchangelib on a very restricted system which has no internet connection and it is not possible to create one | 1 | 1 | 1 | 49,296,477 | 0 |
0 | 0 | I am trying to test my Python script using Jenkins. The issue I am facing is with the test report generation.
I have created a folder 'test_reports' in my jenkins workspace.
C:\Program Files (x86)\Jenkins\jobs\PythonTest\test_reports
But then when I run the script from jenkins I get the error as,
ERROR: Step ‘Publish JUnit test... | false | 49,361,114 | 0 | 1 | 0 | 0 | This was an expected result because the script file I wrote was not a unit-test module. It was just a normal Python file (it wasn't supposed to create any XML results).
Once I created the script using the unit-test framework and imported the XML runner, I was able to generate the XML files of the result. | 0 | 648 | 0 | 0 | 2018-03-19T10:49:00.000 | python,testing,jenkins,report | How to configure xml Report in jenkins | 1 | 1 | 1 | 49,377,372 | 0
0 | 0 | I am trying to pull invoices by Accounts and have not managed to find a way to link the two. Am I missing something?
I tried through Contacts but it doesn't seem to have an Account or Account ID to match
I am using Pyxero for this, however this doesn't seem relevant, more so the data from xero api.
Thanks | false | 49,382,835 | 0 | 0 | 0 | 0 | I've figured it out - these details only appear when pulling an invoice one by one or paginated in the line items column. | 0 | 127 | 0 | 0 | 2018-03-20T11:18:00.000 | python,xero-api | Xero Api matching Accounts with Invoices | 1 | 1 | 1 | 49,387,718 | 0 |
0 | 0 | Good day. I have a question about processing accepted connections. I have a Python Tornado IOLoop and a listening socket. When a new client connects and the connection is accepted by the Tornado handler, client interaction begins. That interaction includes multiple requests/responses, so there is a reason to poll acce... | true | 49,390,862 | 1.2 | 0 | 0 | 0 | I've searched how "tornado.web" does it. It works with the default IOLoop instance, and that instance accepts connections and handles (processes) new sockets that were created after connections were accepted. The second part is done by IOStream.
So the answer is to use the same IOLoop object and not to poll sockets manually. | 0 | 120 | 1 | 0 | 2018-03-20T17:40:00.000 | python,select,tornado,epoll,ioloop | IOLoop/epoll/select for accepted connections | 1 | 1 | 1 | 49,399,600 | 0
0 | 0 | I have tried downloading small files from google Colaboratory. They are easily downloaded but whenever I try to download files which have a large sizes it shows an error? What is the way to download large files? | false | 49,428,332 | 0 | 0 | 0 | 0 | Google colab doesn't allow you to download large files using files.download(). But you can use one of the following methods to access it:
The easiest one is to use github to commit and push your files and then clone it to your local machine.
You can mount google-drive to your colab instance and write the files there. | 0 | 7,373 | 0 | 11 | 2018-03-22T12:08:00.000 | python-3.x,tensorflow,gpu,google-colaboratory | How to download large files (like weights of a model) from Colaboratory? | 1 | 1 | 4 | 49,431,101 | 0 |
0 | 0 | I'm using the warcio library to read and write warc files.
When trying to write a record of a response object from requests.get(URL,stream=False), warcio is writing only HTTP headers to the record but not the payload. However, when stream mode is enabled it works fine.
Is there a way to store the payload when stream mod... | true | 49,429,211 | 1.2 | 0 | 0 | 0 | I've found a workaround, but I'm not sure if it's the correct way. Instead of making the request object streamable, I've made the payload streamable:
BytesIO(response.text.encode()) and this seems to work. | 0 | 461 | 0 | 1 | 2018-03-22T12:52:00.000 | python,python-3.x,python-requests,warc | Creating a warc record with requests.get() response using warcio | 1 | 1 | 1 | 49,430,305 | 0 |
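For context, a sketch based on warcio's documented WARCWriter interface for writing a response record; with stream=False the body is already in memory, so wrapping the raw bytes (response.content rather than re-encoding response.text) in a BytesIO is essentially the workaround the answer describes.

```python
from io import BytesIO

import requests
from warcio.statusandheaders import StatusAndHeaders
from warcio.warcwriter import WARCWriter

url = "http://example.com/"
resp = requests.get(url, stream=False)

with open("example.warc.gz", "wb") as output:
    writer = WARCWriter(output, gzip=True)
    http_headers = StatusAndHeaders(
        f"{resp.status_code} {resp.reason}",
        list(resp.headers.items()),
        protocol="HTTP/1.1",
    )
    # Wrap the in-memory body in a file-like object for the record payload.
    record = writer.create_warc_record(
        url, "response", payload=BytesIO(resp.content), http_headers=http_headers
    )
    writer.write_record(record)
```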
0 | 0 | I am writing a piece of code where it is vital that the browser stays open; however, I need to be able to close windows to stop the browser from over-populating. I have been using the webbrowser module, but it seems that webbrowser doesn't have a way to close a tab once it is open. Any ideas?
Remember the browser must stay o... | false | 49,457,439 | 0 | 0 | 0 | 0 | Webbrowser is a limited api module for interfacing with popular browsers.
The way I see it you have a few options:
Find a module pertaining to the particular browser you're dealing with.
Work with the api of the browser(s) you're working with directly
Request feature of webbrowser in the future, but won't help you no... | 0 | 42 | 0 | 0 | 2018-03-23T19:56:00.000 | python | I need to be able to close an internet tab, but i cannot close the browser | 1 | 1 | 1 | 49,458,231 | 0 |
0 | 0 | I'm using ldap3.
I can connect and read all attributes without any issue, but I don't know how to display the photo of the attribute thumbnailPhoto.
If I print(conn.entries[0].thumbnailPhoto) I get a bunch of binary values like b'\xff\xd8\xff\xe0\x00\x10JFIF.....'.
I have to display it on a bottle web page. So I have t... | false | 49,458,945 | 0.099668 | 0 | 0 | 1 | The easiest way is to save the raw byte value in a file and open it with a picture editor. The photo is probably a jpeg, but it can be in any format. | 0 | 4,282 | 0 | 1 | 2018-03-23T22:11:00.000 | python-3.x,ldap | how to get and display photo from ldap | 1 | 1 | 2 | 49,470,034 | 0 |
0 | 0 | Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set.
But the code
from selenium import webdriver
gives an unresolved reference error.
Any suggestion on how to fix the proble... | false | 49,482,586 | 0.066568 | 0 | 0 | 1 | I used this command to resolve my error.
pip install webdriver_manager | 1 | 8,895 | 0 | 1 | 2018-03-26T00:59:00.000 | python,selenium,pycharm | Pycharm Referenced Error With Import Selenium Webdriver | 1 | 3 | 3 | 68,728,420 | 0 |
0 | 0 | Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set.
But the code
from selenium import webdriver
gives an unresolved reference error.
Any suggestion on how to fix the proble... | false | 49,482,586 | 0.066568 | 0 | 0 | 1 | I found this worked for me. I'm using PyCharm Community 2018.1.4 on Windows.
Navigate to: File->Settings->Project: [project name] -> Project Interpreter
On this page click the configuration wheel at the top which should provide a drop down menu. Click "Add" and a window should appear called "Add Python Interpreter"
You... | 1 | 8,895 | 0 | 1 | 2018-03-26T00:59:00.000 | python,selenium,pycharm | Pycharm Referenced Error With Import Selenium Webdriver | 1 | 3 | 3 | 51,881,788 | 0 |
0 | 0 | Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set.
But the code
from selenium import webdriver
gives an unresolved reference error.
Any suggestion on how to fix the proble... | true | 49,482,586 | 1.2 | 0 | 0 | 3 | Pycharm > Preferences > Project Interpreter
Then hit the '+' to install the package to your project path.
Or you can add that path to your PYTHONPATH environment variable in your project. | 1 | 8,895 | 0 | 1 | 2018-03-26T00:59:00.000 | python,selenium,pycharm | Pycharm Referenced Error With Import Selenium Webdriver | 1 | 3 | 3 | 49,482,631 | 0 |
0 | 0 | I am testing complex and non-public webpages with python-selenium, which have interconnected iframes.
To properly click on a button or select a given element in a different iframe I have to switch to that iframe. Now, as contents of the pages might reload to the correct iframe I constantly have to check if the corre... | false | 49,492,516 | 0 | 0 | 0 | 0 | Unfortunately the API is built that way and you can't do anything about it. Each IFrame is a separate document as such, so searching for an object in every IFrame would mean Selenium has to switch to every IFrame and do that for you.
Now you can build a workaround by storing the IFrame paths and using helper method... | 0 | 717 | 0 | 0 | 2018-03-26T13:21:00.000 | python,selenium,iframe | Is there a workaround to avoid iframes in selenium testing? | 1 | 1 | 2 | 49,492,946 | 0 |
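A sketch of the helper-method idea from that answer — storing the iframe path once and re-walking it before each interaction, so the "which frame am I in" bookkeeping lives in one place. The frame names, URL and locator are placeholders:

```python
from selenium import webdriver


def switch_to_frame_path(driver, frame_path):
    """Reset to the top document, then descend through each iframe in order."""
    driver.switch_to.default_content()
    for frame in frame_path:           # name, id, index or WebElement
        driver.switch_to.frame(frame)


driver = webdriver.Chrome()            # placeholder driver
driver.get('https://example.com')      # placeholder URL

BUTTON_FRAME_PATH = ['outer_frame', 'inner_frame']   # placeholder frame names

switch_to_frame_path(driver, BUTTON_FRAME_PATH)
driver.find_element_by_id('submit').click()          # placeholder element id
```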
1 | 0 | I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed. | false | 49,494,093 | -0.066568 | 0 | 0 | -1 | I'm no expert but I would say that your speed is pretty slow. I just went to google, typed in the word "hats", pressed enter and: about 650,000,000 results (0.63 seconds). That's gonna be tough to compete with. I'd say that there's plenty of room to improve. | 0 | 453 | 0 | 5 | 2018-03-26T14:38:00.000 | python,scrapy,web-crawler | What is a good crawling speed rate? | 1 | 2 | 3 | 49,523,000 | 0 |
1 | 0 | I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed. | false | 49,494,093 | 0 | 0 | 0 | 0 | It really depends but you can always check your crawling benchmarks for your hardware by typing scrapy bench on your command line | 0 | 453 | 0 | 5 | 2018-03-26T14:38:00.000 | python,scrapy,web-crawler | What is a good crawling speed rate? | 1 | 2 | 3 | 70,224,507 | 0 |
0 | 0 | os.path.ismount() will verify whether the given path is mounted on the local linux machine. Now I want to verify whether the path is mounted on the remote machine. Could you please help me how to achieve this.
For example: my dev machine is : xx:xx:xxx
I want to verify whether the '/path' is mounted on yy:yy:yyy.
How ... | false | 49,504,741 | 0 | 0 | 0 | 0 | If you have access to both machines, then one way could be to leverage python's sockets. The client on the local machine would send a request to the server on the remote machine, then the server would do os.path.ismount('/path') and send back the return value to the client. | 0 | 128 | 0 | 0 | 2018-03-27T05:06:00.000 | python,python-2.7 | Verify mountpoint in the remote server | 1 | 1 | 1 | 49,505,061 | 0 |
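A compact sketch of the client/server idea from that answer — a tiny socket service on the remote machine that answers os.path.ismount() for a path sent by the client. The host, port and path are placeholders:

```python
# server.py — run on the remote machine (yy:yy:yyy)
import os
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('0.0.0.0', 5050))          # placeholder port
srv.listen(1)
while True:
    conn, _ = srv.accept()
    path = conn.recv(4096).decode()
    conn.sendall(b'1' if os.path.ismount(path) else b'0')
    conn.close()
```

```python
# client.py — run on the dev machine (xx:xx:xxx)
import socket

cli = socket.create_connection(('remote-host', 5050))   # placeholder host/port
cli.sendall(b'/path')
is_mounted = cli.recv(1) == b'1'
cli.close()
print('mounted' if is_mounted else 'not mounted')
```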
0 | 0 | A robot is connected to a network with restricted outbound traffic. Only inbound traffic is allowed from one specific IP address (our IP, e.g. 111.111.111.111). All outgoing traffic is forbidden.
There are settings and DHCP corresponding to the external IP (e.g. 222.222.222.222). We want to connect to Pepper from the IP 111.... | true | 49,508,903 | 1.2 | 0 | 0 | 3 | NAOqi connections go through port 9559 by default, so you could check whether that one is blocked.
If you are unable to connect through port 9559, you can do a port forwarding. But I think this is a more network related question. | 0 | 490 | 1 | 0 | 2018-03-27T09:18:00.000 | python,networking,connection,pepper,choregraphe | How to connect Choregraphe/Python script to remote Pepper robot from different network? | 1 | 1 | 1 | 49,528,384 | 0 |
0 | 0 | I want to include google correlate into my application using Python but I require its API to do so. Please help me where to look at or share me some insights about it. Thanks. | false | 49,526,421 | 0 | 0 | 0 | 0 | Google correlate data is valid up till 2017 March, not sure if it's deprecated but it definitely won't be useful if you're after up-to-date correlations | 0 | 218 | 0 | 0 | 2018-03-28T04:59:00.000 | python-3.x | Is there any google correlate API that I can refer to? | 1 | 1 | 1 | 55,298,815 | 0 |
0 | 0 | So this is a bit of a tricky situation. Using Three.js/ReactJS and canvas.
Scenario: When I click and drag a sphere beyond its boundaries a tooltip will show a warning message over the mouse pointer. When I release the mouse the tooltip will disappear. When I click and drag the sphere back to a position inside the bou... | false | 49,539,286 | 0 | 0 | 0 | 0 | to get around my issue, I had to do this
chain = ActionChains(page.driver).move_to_element_with_offset(sphere_order_panel, -1047, 398).click_and_hold()
chain = chain.move_to_element_with_offset(sphere_order_panel, -1047, 398)
chain.perform() | 0 | 198 | 0 | 0 | 2018-03-28T16:04:00.000 | python,reactjs,selenium,canvas,three.js | Python Selenium: Show Tooltip on Mouse Pointer (Three.js/React/Canvas) | 1 | 1 | 1 | 49,634,904 | 1 |
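The snippet in that answer, tidied into a runnable shape with the imports it needs; the driver setup, URL, element locator and offsets are placeholders taken from or modelled on the original code:

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()                      # placeholder driver
driver.get('https://example.com')                # placeholder URL
sphere_order_panel = driver.find_element_by_id('order-panel')  # placeholder locator

# Press and hold at an offset, then move while still holding the button,
# which keeps the drag (and therefore the tooltip) active on the canvas.
chain = ActionChains(driver).move_to_element_with_offset(
    sphere_order_panel, -1047, 398).click_and_hold()
chain = chain.move_to_element_with_offset(sphere_order_panel, -1047, 398)
chain.perform()
```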
0 | 0 | In select, there is a list for error sockets, and epoll has an event for ERROR,
but the selectors module only has events for EVENT_READ and EVENT_WRITE.
Therefore, how can I know about an error socket without such an event? | true | 49,547,266 | 1.2 | 0 | 0 | 6 | An error on the socket will always result in the underlying socket being signaled as readable (at least). For example, if you are waiting for data from a remote peer, and that peer closes its end of the connection (or abends, which does the same thing), the local socket will get the EVENT_READ marking. When you go to r...
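A small sketch of what that answer describes — the selectors module reports an errored or closed socket as readable, and the error actually surfaces when you read. The peer host/port are placeholders:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

sock = socket.create_connection(('example.com', 80))   # placeholder peer
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ)

for key, events in sel.select(timeout=5):
    s = key.fileobj
    try:
        data = s.recv(4096)
    except OSError as exc:          # the error shows up on the read itself
        print('socket error:', exc)
        sel.unregister(s)
        s.close()
    else:
        if not data:                # peer closed: also signalled as readable
            print('peer closed the connection')
            sel.unregister(s)
            s.close()
        else:
            print('received', len(data), 'bytes')
```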
0 | 0 | To access Google Drive files, you need to call google.colab.auth.authenticate_user(), which presents a link to an authentication screen, which gives a key you need to paste into the original notebook.
Is it possible to skip this altogether? After all, the notebook is already 'linked' to a specific account.
is it possibl... | true | 49,548,471 | 1.2 | 0 | 0 | 2 | Nope, there's no way to avoid this step at the moment.
No, there's no safe way to save this token between runs.
Sharing the notebook doesn't share the token. Another user executing your notebook will go through the auth flow as themselves, and will only be able to use the token they get for Drive files they already hav... | 0 | 763 | 0 | 3 | 2018-03-29T05:12:00.000 | python,google-authentication,google-colaboratory | When accessing google driver from google colab, is it possible to eliminate, or simplify authentication? | 1 | 1 | 1 | 49,583,498 | 0 |
0 | 0 | In cases where I need to cancel an order, I need to know whether to void or refund the transaction. I'm trying to learn whether the transaction has settled using the Transaction Details API.
transactionDetailsResponse.transaction.transactionStatus seems like it might be the right thing to look at. Does anyone know wh... | true | 49,558,617 | 1.2 | 0 | 0 | 0 | That is the right place to look. The possible values for that field are:
authorizedPendingCapture
capturedPendingSettlement
communicationError
refundSettledSuccessfully
refundPendingSettlement
approvedReview
declined
couldNotVoid
expired
generalError
failedReview
settledSuccessfully
settlementError
underReview
voided
F... | 0 | 70 | 0 | 0 | 2018-03-29T14:25:00.000 | python,authorize.net | How can I tell if an authorize.net transaction has settled? | 1 | 1 | 1 | 49,566,293 | 0 |
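A small illustrative helper based on the status values listed above — deciding whether a cancellation should be a void (not yet settled) or a refund (already settled). This only sketches the decision logic, not the Authorize.Net SDK call itself, and the grouping of statuses is an assumption you should verify against your account's behaviour:

```python
# Statuses that mean the funds have already settled, so a refund is needed;
# transactions still pending can normally be voided instead.
SETTLED_STATUSES = {'settledSuccessfully'}
PENDING_STATUSES = {'authorizedPendingCapture', 'capturedPendingSettlement'}


def cancellation_action(transaction_status):
    """Return 'refund', 'void' or 'review' for a transactionStatus string."""
    if transaction_status in SETTLED_STATUSES:
        return 'refund'
    if transaction_status in PENDING_STATUSES:
        return 'void'
    return 'review'   # declined, errored, expired or under-review transactions


print(cancellation_action('capturedPendingSettlement'))   # -> void
print(cancellation_action('settledSuccessfully'))         # -> refund
```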
1 | 0 | I'm developing a chatbot using heroku and python. I have a file fetchWelcome.py in which I have written a function. I need to import the function from fetchWelcome into my main file.
I wrote "from fetchWelcome import fetchWelcome" in main file. But because we need to mention all the dependencies in the requirement fil... | false | 49,561,062 | 0 | 0 | 0 | 0 | If we need to import function from fileName into main.py, write "from .fileName import functionName". Thus we don't need to write any dependency in requirement file. | 0 | 694 | 0 | 0 | 2018-03-29T16:33:00.000 | python,heroku | Heroku Python import local functions | 1 | 1 | 2 | 49,571,369 | 0 |
1 | 0 | Is there a way to use Celery for:
Queue an HTTP call to an external URL with form parameters (HTTP POST to
the URL)
The external URL will respond with an HTTP response, 200, 404, 400 etc.; if
the response is a non-200-ish error response, it will retry for
a certain number of retries and will retire as needed
Add Task / Job / Work qu... | false | 49,608,179 | 0.066568 | 0 | 0 | 1 | You can use the Flower REST API to do the same; Flower is a monitoring tool for Celery, but it comes with a REST API to add tasks and so on
https://flower.readthedocs.io/en/latest/index.html | 0 | 7,519 | 1 | 4 | 2018-04-02T08:53:00.000 | python,rest,celery | Celery REST API | 1 | 1 | 3 | 58,300,556 | 0 |
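A rough sketch of the queued-HTTP-call-with-retries part of the question, using a plain Celery task that retries on non-200 responses; the broker URL, retry counts and endpoint are placeholders (the Flower REST API mentioned above would then be one way to enqueue such tasks over HTTP):

```python
import requests
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')   # placeholder broker


@app.task(bind=True, max_retries=5, default_retry_delay=30)
def post_to_external_url(self, url, form_data):
    resp = requests.post(url, data=form_data, timeout=10)
    if resp.status_code != 200:
        # Non-200 response: retry with the configured delay and retry limit.
        raise self.retry(exc=Exception('HTTP %s' % resp.status_code))
    return resp.status_code

# Enqueue from application code:
# post_to_external_url.delay('https://example.com/endpoint', {'field': 'value'})
```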
0 | 0 | Windows command netsh interface show interface shows all network connections and their names. A name could be Wireless Network Connection, Local Area Network or Ethernet etc.
I would like to change an IP address with netsh interface ip set address "Wireless Network Connection" static 192.168.1.3 255.255.255.0 192.168.1... | false | 49,624,485 | 0.379949 | 0 | 0 | 2 | I don't know of a Python netsh API. But it should not be hard to do with a pair of subprocess calls. First issue netsh interface show interface, parse the output you get back, then issue your set address command.
Or am I missing the point? | 0 | 2,892 | 1 | 0 | 2018-04-03T07:25:00.000 | python,static-ip-address | How to find out Windows network interface name in Python? | 1 | 1 | 1 | 49,625,033 | 0 |
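A sketch of the subprocess approach suggested above — list the interface names from netsh interface show interface, pick the one you want, then issue the set address command. The parsing is deliberately naive and the addresses are placeholders:

```python
import subprocess


def interface_names():
    out = subprocess.check_output(
        ['netsh', 'interface', 'show', 'interface']).decode(errors='ignore')
    names = []
    for line in out.splitlines()[3:]:          # skip the header rows
        parts = line.split()
        if len(parts) >= 4:
            names.append(' '.join(parts[3:]))  # interface name is the last column
    return names


names = interface_names()
print(names)

# Placeholder addresses; pick the interface name you actually need.
if names:
    subprocess.check_call([
        'netsh', 'interface', 'ip', 'set', 'address', names[0],
        'static', '192.168.1.3', '255.255.255.0', '192.168.1.1'])
```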
1 | 0 | My website is hosted on Google App Engine using Standard Python.
In request handlers, I am setting HTTP header "cache-control: max-age=3600, public"
So the frontend server "Google Frontend" caches the response for 1 hr (which I want, to save cost).
In rare cases the content of page changes and I want the content in frontend c... | false | 49,658,369 | 0.379949 | 0 | 0 | 2 | When you set cache-control via the header or meta tag, that tells the browser to store the response. So, the next time, it will not even ping your server. This means that you cannot invalidate that cache after set.
What you need is a backend cache. Frameworks like Django, Flask, etc. make this easy. You can set a te... | 0 | 130 | 1 | 0 | 2018-04-04T18:53:00.000 | python,google-app-engine | How to invalidate cashed URL response from GAE "server: Google Frontend" | 1 | 1 | 1 | 49,659,939 | 0 |
0 | 0 | I'm using python 2.7.10 virtualenv when running python codes in IntelliJ. I need to install requests[security] package. However I'm not sure how to add that [security] option/config when installing requests package using the Package installer in File > Project Structure settings window. | false | 49,679,283 | 0 | 0 | 0 | 0 | Was able to install it by doing:
Activating the virtualenv in the 'Terminal' tool window:
source <virtualenv dir>/bin/activate
Executing a pip install requests[security] | 1 | 852 | 0 | 1 | 2018-04-05T18:38:00.000 | python,python-2.7,intellij-idea,virtualenv | How to Install requests[security] in virtualenv in IntelliJ | 1 | 1 | 1 | 49,679,964 | 0 |
0 | 0 | I am using an Azure HTTP triggered function to perform a task. I am passing the function key as an HTTP header parameter, and my payload is a JSON with some data that invokes downstream procedures. I am using urllib (Python lib) for this request and this is the response I am getting, but the function is getting tri... | false | 49,682,697 | 0 | 0 | 0 | 0 | This was more of a firewall issue. We have been trying to connect to an Azure Analysis Service from ADW and we have added IP filtering (our corporate public IP) for AAS, and then when the function's procedure is trying to connect to AAS it is facing some IP issue (this is NOT the corporate public IP). We have added th... | 1 | 410 | 0 | 0 | 2018-04-05T22:51:00.000 | python,azure,azure-functions | Azure HTTP trigger function call returning 417 error code | 1 | 1 | 1 | 49,696,760 | 0
0 | 1 | I am currently working on a large graph, with 1.5 million nodes and 11 million edges.
For the sake of speed, I checked the benchmarks of the most popular graph libraries: iGraph, Graph-tool, NetworkX and Networkit. And it seems iGraph, Graph-tool and Networkit have similar performance. And I eventually used iGraph.
With t... | true | 49,713,991 | 1.2 | 0 | 0 | 0 | The cutoff really depends on the application and on the netwrok parameters (# nodes, # edges).
It's hard to talk about closeness threshold, since it depends greatly on other parameters (# nodes, # edges,...).
One thing you can know for sure is that every closeness centrality is somewhere between 2/[n(n-1)] (which is ... | 0 | 676 | 0 | 1 | 2018-04-08T03:04:00.000 | python,igraph | Cutoff in Closeness/Betweenness Centrality in python igraph | 1 | 1 | 1 | 51,268,892 | 0 |
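A small sketch of using a cutoff with python-igraph; the cutoff keyword on closeness() is assumed to be available in your igraph version, and the graph here is a small random example rather than the 1.5M-node graph:

```python
import igraph as ig

g = ig.Graph.Erdos_Renyi(n=1000, m=5000)    # example graph, not the real one

# Exact values (slow on very large graphs):
exact = g.closeness()

# Approximation: only consider shortest paths up to length 3.
# NOTE: the cutoff keyword is an assumption about your igraph version.
approx = g.closeness(cutoff=3)

print(exact[:5])
print(approx[:5])
```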
0 | 0 | I want to know, after finding the closest match from the text section of the response table, how ChatterBot generates the "in_response_to" list and the "in_response_to_contains" list. If somebody could enlighten me on this, it would be a great help. | true | 49,719,767 | 1.2 | 0 | 0 | 0 | The in_response_to list is generated based on previous input statements that the bot receives. So for example, let's say that the following interaction occurs:
User: "Hello, how are you?"
Bot: "I am well, how are you."
User: "I am also well."
In this case, the bot would learn based on how the user responded to it. So ... | 0 | 98 | 0 | 1 | 2018-04-08T15:51:00.000 | python-3.x,chatterbot | how chatterbot is creating the in_response_to and in_response_to_contains list | 1 | 1 | 1 | 50,184,310 | 0 |
0 | 0 | I have Tika code on a server. I want to create an SFTP session with another server that has files and run Apache Tika on that server. I am using Python as the back end. Will this work? Is my approach correct?
Thanks | true | 49,776,224 | 1.2 | 1 | 0 | 0 | So, what I was planning to do was not ideal.
Apache Tika requires scanning the physical files to fetch metadata. I made a bridge and started pulling files from the SFTP sessions to the server where the Tika code was hosted. | 0 | 73 | 0 | 0 | 2018-04-11T13:19:00.000 | python,sftp,apache-tika | using apache tika for scanning documents on servers using sftp | 1 | 1 | 1 | 50,650,506 | 0
1 | 0 | I'm working on some automation work; as per my requirement I need to click on Chrome physical buttons like left nav, right nav, bookmarks, menu etc. I can do it with shortcuts, but my requirement is to click on browser buttons. Any ideas would be helpful. Thanks in advance. | false | 49,799,864 | 0 | 0 | 0 | 0 | This can't be done with Selenium WebDriver and I think also not with the standalone Selenium server. Selenium only allows you to interact with the DOM.
The only way to achieve what you want to do is to use an automation tool that actually runs directly in the OS that you use. Java can be used to write such a program.
I wou... | 0 | 1,530 | 0 | 0 | 2018-04-12T15:00:00.000 | java,python,google-chrome,selenium,selenium-chromedriver | Selenium click chrome physical buttons like menu, left, right navigation, bookmarks | 1 | 1 | 3 | 49,800,000 | 0 |
1 | 0 | I've got an issue with scrapy and python.
I have several links. I crawl data from each of them in one script with the use of a loop. But the order of crawled data is random, or at least doesn't match the link.
So I can't match the url of each subpage with the outputted data.
Like: crawled url, data1, data2, data3.
Data 1, d... | false | 49,896,079 | -0.066568 | 0 | 0 | -1 | OK, it seems that the solution is in the settings.py file in Scrapy.
DOWNLOAD_DELAY = 3
Between requests.
It should be uncommented; by default it's commented out. | 0 | 79 | 0 | 0 | 2018-04-18T09:29:00.000 | python,scrapy | Scrapy - order of crawled urls | 1 | 2 | 3 | 49,899,202 | 0
1 | 0 | I've got an issue with scrapy and python.
I have several links. I crawl data from each of them in one script with the use of a loop. But the order of crawled data is random, or at least doesn't match the link.
So I can't match the url of each subpage with the outputted data.
Like: crawled url, data1, data2, data3.
Data 1, d... | false | 49,896,079 | 0 | 0 | 0 | 0 | time.sleep() - would it be a solution? | 0 | 79 | 0 | 0 | 2018-04-18T09:29:00.000 | python,scrapy | Scrapy - order of crawled urls | 1 | 2 | 3 | 49,898,314 | 0 |
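A short sketch combining the DOWNLOAD_DELAY setting from the first answer with storing response.url in each yielded item, so every row of output stays tied to the page it came from regardless of crawl order; the selectors and URLs are placeholders:

```python
# settings.py — as the first answer suggests, uncomment/set a download delay:
# DOWNLOAD_DELAY = 3

# Spider sketch (placeholder selectors); keeping response.url in the item
# lets you match each scraped row to its source page.
import scrapy


class PageSpider(scrapy.Spider):
    name = 'pages'
    start_urls = ['https://example.com/page1', 'https://example.com/page2']

    def parse(self, response):
        yield {
            'url': response.url,
            'data1': response.css('h1::text').extract_first(),
            'data2': response.css('p::text').extract_first(),
        }
```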