I am not sure I would use "just" in this context ;) This is the kind of trolling I can really appreciate. I can't stop laughing at the full script. So much dedication that went into this troll post (This monstrosity freaking runs and outputs something "meaningful").
Firstly, you can write e.g. {0:.2f} to specify a float with 2 decimals; see e.g. https://www.digitalocean.com/community/tutorials/how-to-use-string-formatters-in-python-3
Secondly, the best formatting method is f-strings; see e.g. https://www.blog.pythonlibrary.org/2018/03/13/python-3-an-intro-to-f-strings/
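A quick sketch of both styles (the value is just an example):

```python
price = 4.23456

# str.format with a format spec: {0:.2f} means "first argument, 2 decimals"
print("Price: {0:.2f}".format(price))  # Price: 4.23

# f-string (Python 3.6+): same format spec, with the variable inlined
print(f"Price: {price:.2f}")           # Price: 4.23
```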
By reading this, you'll become an angr pro and will be able to fold binaries to your whim
If I understand what I've read correctly, this is a tool for picking apart and experimenting with existing compiled binaries. My guess is it's for probing for exploits.
https://plot.ly/ipython-notebooks/big-data-analytics-with-pandas-and-sqlite/
I used this tutorial recently to load an 80 GB CSV file into an SQLite database, then used pandas to do analysis very similar to yours. Highly recommended!
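For anyone curious, here's a minimal sketch of the chunked-load idea; the tiny generated CSV, the table name, and the column names are all stand-ins for your own data:

```python
import sqlite3
import pandas as pd

# Stand-in for the real (huge) file; in practice you'd already have it on disk.
with open("events.csv", "w") as f:
    f.write("genre,plays\nrock,10\njazz,3\nrock,7\n")

# Stream the CSV into SQLite in chunks so it never has to fit in memory.
con = sqlite3.connect(":memory:")
for chunk in pd.read_csv("events.csv", chunksize=2):
    chunk.to_sql("events", con, if_exists="append", index=False)

# Let SQLite do the aggregation, then pull the (small) result back into pandas.
df = pd.read_sql_query(
    "SELECT genre, SUM(plays) AS total FROM events GROUP BY genre", con)
print(df)
con.close()
```

With a real multi-gigabyte file you'd point `read_csv` at it with a much larger `chunksize` and a file-backed database instead of `:memory:`.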
> for her employers to fire her due to threats by anons and harrassers is equally disgraceful.
It's not clear whether that was the reason. The official statement does say that she "put our business in danger", which may be a reference to the DoS. But immediately before that they give a much better reason, which is that due to her mistakes she could no longer be effective in her role.
If you've already set up Anaconda then you should be ready to go with a Jupyter notebook (https://jupyter.org/). He can write the code and execute it with the results being displayed below each cell. The command (jupyter notebook) is simple to remember and the notebook functions the same on every device (at least in my experience).
As he progresses I think you're on the right track looking at a text-editor and then moving to PyCharm :)
vim is like the universal text editor: it can handle everything pretty well. However, it is not an expert in any single file type. vim really shines when you edit whatever.py and then stuff.go and then reddit_bot.cpp and then video.ini and then fix_flicker.patch before tweaking your systemd.conf and comparing your nginx.conf to animal_list.json.
It is very different from what a regular text editor is nowadays, and you will learn a lot of basic editing up front before it becomes useful.
There is a lot of stuff in vim for historical reasons and a lot of plugins that can add language specific features. I recommend visiting https://neovim.io/ if you want to start using vim.
For those of you having auto-complete issues with Visual Studio Code, I wanted to make you aware that we are working on a new auto-complete engine, the Python Language Server, and you can try it out by changing your settings.
It gets better every week, we are currently working through a set of performance improvements before we make this the default. If you run into issues, check out our troubleshooting guide for common setup problems and how to file issues.
You can get it directly from the publisher (which also links to other places) and of course it's on Amazon.
If you're saying I look like John Oliver, I'm flattered!
My lab has been using Python3 in production for 2+ years. The web framework people were the biggest stick-in-the-muds, but they've mostly come along at this point. For example, Django works on Python3 as of last December.
PyPy has experimental support for Python3, but it's not ready for general use yet. At the rate they've been working, I'd expect something usable (if not 100% stable) by the end of the year.
The moral of the story is: people continue to use Python2 mostly because of large, messy, legacy code-bases that they don't have the resources to port yet. As a new user, you don't have this issue. Don't be part of the problem ;)
As far as data science and scientific computing goes, there are 2 workflows/environments that are common.
1. Text Editor + IPython + Jupyter Notebooks
When people refer to IPython, they are usually referring to an improved REPL. What is a REPL? It's an interactive session where you can type Python expressions or commands and interact with the results. Go here to try one: https://repl.it/ Python comes with its own REPL, but IPython is an improved version of it.
Jupyter Notebooks (formerly IPython Notebooks) take the IPython REPL and put it in your browser. A notebook lets you mix Python code with its results, and it can be shared with multiple people. Jupyter notebooks support other languages too.
2. Spyder
Spyder is an IDE made specifically for data scientists. Unlike other IDEs like PyCharm, it is lightweight and operates under the assumption that your products are mainly number crunching and analysis; other IDEs are purpose-built for developers making full-blown applications.
Finally, let's talk about Anaconda for a bit. Anaconda is a distribution of Python (for lack of a better word). What I mean is this: Anaconda comes with Python and all the popular libraries/tools for scientific computing/data science. This is helpful because installing these yourself can be difficult or time consuming. Anaconda has almost everything you need precompiled and ready to use, whether you are running Windows, macOS, or Linux.
RenPy is used commercially for visual novels and point and click adventures.
There's quite a few RenPy games on Steam and many more on those adult anime dating game vendor sites.
SCAPY!
http://www.secdev.org/projects/scapy/
Scapy lets you mess around with network packets and do... stuff. It's extremely good for learning and exploring low-level networking from a high-level language.
It's fun to automate things.
The cool thing about online services, is that most of them have APIs!
So automate something dumb and fun, using Python as simple glue:
Write a script that sends you an email every time something specific is mentioned in your Twitter feed
Hit the Reddit API, grab the top 10 daily hottest youtube links from /r/<music_genre_you_like>, then hit Youtube and download MP3s of the linked songs, or parse the Artist and Song Title from the string and add 'em to a Spotify playlist or something
Poll Fitbit daily to add your nightly sleep stats to a Google spreadsheet or Evernote table
Poll Facebook API every 10 minutes, and send someone specific a random message over Messenger every time they log in
Write a bot for Twillio to fuck with your friends. Aim to pass a simple Turing test, at least for a few minutes
Use Google Maps to plot your photographed bird sightings on a map using photo geo data (i.e., iterate over all photos in an Amazon Cloud Drive photo gallery named "Bird Club", add each new one's location to a Google Map, save said map to PDF using wkhtml2pdf, and upload it to Dropbox)
These are probably very dumb, but you get the gist.
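As a taste of the glue-code idea, here's a hedged, stdlib-only sketch of the Reddit example; the subreddit name, User-Agent string, and the `youtube_links` helper are all made up for illustration:

```python
import json
from urllib.request import Request, urlopen

def youtube_links(posts):
    """Pick the YouTube submissions out of a list of Reddit post dicts."""
    return [p["url"] for p in posts
            if "youtube.com" in p["url"] or "youtu.be" in p["url"]]

def top_posts(subreddit, limit=10):
    # Reddit serves JSON if you append .json to most listing URLs.
    url = f"https://www.reddit.com/r/{subreddit}/top.json?limit={limit}"
    req = Request(url, headers={"User-Agent": "glue-script-demo/0.1"})
    with urlopen(req) as resp:
        payload = json.load(resp)
    return [child["data"] for child in payload["data"]["children"]]

# Offline demo with fake post data; swap in top_posts("yoursubreddit") for real use.
sample = [{"url": "https://youtu.be/abc123"},
          {"url": "https://example.com/article"}]
print(youtube_links(sample))
```

From there the "glue" part is just feeding the result into whatever API comes next (email, Spotify, a spreadsheet).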
I handle pretty much all of the programming tasks for my research group. The other researchers know some MATLAB, and that was what everything was written in before I showed up. I prefer Python much more than MATLAB, so I've converted everything to Python and Flask web apps.
Flask is very simple. You can use Miguel Grinberg's Mega Tutorial if you've never done any web dev. If you know the basics of MVC, then Flask should take an hour to become comfortable with.
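For reference, a minimal Flask app is only a few lines; the route names here are just placeholders (and the dict-return shorthand assumes Flask 1.1+):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the lab!"

@app.route("/sample/<int:sample_id>")
def show_sample(sample_id):
    # A real app would look the sample up somewhere; this just echoes the id as JSON.
    return {"sample": sample_id}

# Run the dev server with:  flask --app yourfile run   (or app.run() in a script)
```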
Another terminal-based YouTube player that I use extensively: https://github.com/mps-youtube/mps-youtube
Features as provided on the official page :
Search and play audio/video from YouTube
Search tracks of albums by album title
Search and import YouTube playlists
Create and save local playlists
Download audio/video
Convert to mp3 & other formats (requires ffmpeg or avconv)
View video comments
Works with Python 3.x
Works with Windows, Linux and Mac OS X
Reminds me of the `is_computer_on()` function in the BeOS API.
>Returns 1 if the computer is on. If the computer isn't on, the value returned by this function is undefined.
> especially for getting smoother output.
Well, you are asking quite a lot of your terminal emulator here... I'm actually impressed by the performance you've been able to squeeze out thus far.
One thing you might want to try is to use a GPU accelerated terminal emulator like alacritty.
I know that people constantly complain about me using scribd, so the PDF is available separately: http://pocoo.org/~mitsuhiko/badideas.pdf
The code is on github: https://github.com/mitsuhiko/badideas
String concatenation is cheap in CPython, but not in PyPy or possibly other implementations. There it's quadratic because the JIT can't optimize it.
.join() is more portably performant.
Head down to "String concatenation is expensive."
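A small sketch of the portable idiom (the sizes and repeat counts are arbitrary; absolute timings will vary by interpreter):

```python
import timeit

parts = [str(i) for i in range(1000)]

def concat_plus():
    s = ""
    for p in parts:
        s += p                 # can be quadratic off-CPython
    return s

def concat_join():
    return "".join(parts)      # linear on any implementation

assert concat_plus() == concat_join()
print("+=  :", timeit.timeit(concat_plus, number=500))
print("join:", timeit.timeit(concat_join, number=500))
```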
Try Pyinstaller instead: http://www.pyinstaller.org/
Its function is similar to py2exe but it also generates Mac OS X and Linux binaries.
See: http://www.pyinstaller.org/wiki/SupportedPackages -- Note: wxPython is supported.
The question is indeed tricky, but is this really the sort of question you're expecting?
In my career, both being interviewed and now as an engineer at a FAANG conducting interviews myself, I've never really seen (or used) this type of 'gotcha' question.
The LeetCode (https://leetcode.com/) type of questions are more common, with a large part of the focus being on *how* you approach the problem and not just memorizing the algorithms to solve it (although that helps).
From experience, the true difficulty usually comes from knowing how to apply the right (general) knowledge, calming your nerves so you can think and code in front of another person, and marching your way out of a difficult situation step by step --- not the technical, textbook-exam-type questions you mention here.
I mean, sure --- if you are interviewing for a job that needs that kind of thinking, then I won't press on it. But I'm just wondering if you are focused on the right type of 'hard.'
I've been using Power BI this summer to create dashboards with custom R visuals using ggplot2, and custom R/HTML visuals using plotly. The R integration has been really useful for building custom plots that my clients have found pretty impressive, so I'm happy to see Microsoft expand this to Python too.
There's a blog post to accompany this announcement with some demo Power BI dashboards with python visuals.
A couple of tips:
Overall I like the design, man; pretty good-looking site you have there.
I think you would get the best performance out of a tuple. e.g.
random.choice(('foo', 'bar', 'baz'))
A quick test seems to support this:
```
> test.py
Three element list: 6.46300005913 seconds
Three element tuple: 5.5640001297 seconds
Three element string split: 8.19199991226 seconds
```
More info (from the author of FastAPI) on the drama that pulled this from 3.10 at the last minute to protect Pydantic/FastAPI/etc...
A best of both worlds approach is being worked on for 3.11
In case you're unaware, or anyone else is looking for a more substantial book, Fluent Python covers Pythonic usage through 3.5, which should at least get you most of the way there.
Hey, founder of Anvil here - "end-to-end Python" is exactly what we do!
Anvil is a replacement for the whole web framework in pure Python - no HTML, no JS, no HTTP API required[*]. Your client-side code is in Python, your server-side code is in Python, you can call straight from one to the other - and you can publish it instantly on the web. It's astonishing how fast a good Python developer can put up a web app!
As well as our hosted service (which has a free tier), we also provide on-site installations for people who need their apps on their own servers. I'm happy to answer any questions.
[*] Unless you want those things, that is. You can use custom HTML layouts, and working with REST APIs is pretty slick.
As a full-featured IDE, I would recommend PyCharm. A good alternative would be a solid programming text editor like Sublime Text 2. I would choose Sublime Text over vim because it's much easier to learn and contains a full Python interpreter to extend its functionality - perfect for every Python programmer. Personally, I use a combination of both.
Django has very good defaults and you can make pretty small Django applications. Not that Flask isn't a good framework, but if you wanted to expand a website like OP suggested, you would probably be happy with Django's standard lib. For example: the admin, forms, and authentication parts could come in handy.
Are you aware of Kivy? It's a lot better than pygame for mobile games (and IMO for just about anything else, but I'm a kivy dev so I could be biased about it).
> Python's way of drawing to the screen seems slow
Pygame uses SDL 1, which is not bad for some things but is vastly slower than a modern OpenGL pipeline. Kivy's own graphics API is a much more modern abstraction of OpenGL and pushes most computation to the GPU, so it doesn't have these problems.
Kivy is not perfect, and still has plenty of its own limitations (as does anything), I don't want to present it as necessarily solving your problems. But I do think python's mobile power should not be judged based on pygame.
I'm sorry, but that's the opposite of "pythonic".
Magically adding all created users to some hidden list, preventing them from being deleted ever and setting up for hilarious bugs when you decide to work with another set of users elsewhere in the program? Not pythonic, it obviously violates like four and a half of the rules.
Accessing that list by iterating over the class object (with the actual method implemented in a metaclass somewhere else)? Dude, WTF.
Make a, I don't know, `Context` or `Users` or whatever object representing your collection of users, and give it a `new_user` method that creates and adds a user. Then implement iteration as usual. That's it: no possibility for weird bugs, you always see where you pass your collection, and you don't have to use complicated, rarely used features to implement weird, unexpected functionality.
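A minimal sketch of that design might look like this (all class and method names are just placeholders):

```python
class User:
    def __init__(self, name):
        self.name = name

class Users:
    """An explicit collection: no hidden global registry, plain iteration."""
    def __init__(self):
        self._users = []

    def new_user(self, name):
        user = User(name)
        self._users.append(user)
        return user

    def __iter__(self):
        return iter(self._users)

    def __len__(self):
        return len(self._users)

staff = Users()
staff.new_user("alice")
staff.new_user("bob")
print([u.name for u in staff])  # ['alice', 'bob']
```

Because the collection is an ordinary object you pass around, a second, independent set of users elsewhere in the program is trivial: just make another `Users()`.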
I made a small modification in the program. Added:
import multiprocessing
Changed:
class Worker(threading.Thread):
To:
class Worker(multiprocessing.Process):
And run the code:
python gildetector.py
```
2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)]
# of threads: 1, took 0.259523, effectively 1 cores
# of threads: 2, took 0.273987, effectively 1.89442 cores
# of threads: 3, took 0.28669, effectively 2.71571 cores
# of threads: 4, took 0.298521, effectively 3.47744 cores
# of threads: 5, took 0.465576, effectively 2.78711 cores
# of threads: 6, took 0.522528, effectively 2.98001 cores
Python is utilizing 3.5/4 cores
Rejoice! You're awesome!
```
So I am awesome on CPython 2.7 too! Here run it yourself and see if everyone is awesome:
I have no relation to Plotly but I've been tinkering with the draft version for a couple of months now and waiting very impatiently for the public release of this tool.
Here's the public github: https://github.com/plotly/dash
And here's the user guide/documentation: https://plot.ly/dash/
If someone from Plotly had a fancy announcement all planned out, let me know and I'll delete this post. I was just very excited to see criddyp's code merge this morning.
From http://www.python.org/static/humans.txt:
> Standards: HTML5, CSS3, W3C (as much as possible)
>
> Core: Python 3 and Django 1.5
>
> Components: Modernizr, jQuery, Susy (susy.oddbird.net)
>
> Software: SASS and Compass, Coda, Sublime Text, Terminal, Adobe CS, Made on Macs
>
> Hardware Stack: Ubuntu 12.04, Postgresql 9.x, Nginx, Gunicorn
>
> Helpers: South, Haystack, Pipeline
Are you familiar with PEP 3107, function annotations, implemented in Python 3? It provides some level of syntactic expressiveness for implementing some form of static type checking (static as in expressed in the source code, but not static in the sense of a "pre-run" check).
Some interesting discussion on this Stackoverflow post
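A tiny sketch of what PEP 3107 annotations look like; note that Python stores them but never enforces them:

```python
def greet(name: str, excited: bool = False) -> str:
    # Annotations are just metadata; Python itself never checks them at runtime.
    return f"Hello, {name}{'!' if excited else '.'}"

print(greet.__annotations__)
# {'name': <class 'str'>, 'excited': <class 'bool'>, 'return': <class 'str'>}
```

External tools (mypy and friends) are what turn this metadata into an actual "pre-run" check.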
I'm loving the energy, but his entire thing is blind leading the blind.
It's like hearing a kid say: "YEAH!! I'm going to jump to the moon! here I go!! ready!! Watch! OOF! OK watch out for the space martians! beep boop beep!".
With the physicists be all like: O_o mmmmm kay.
How are you dealing with feature reduction, curse of dimensionality, lack of data due to survivorship bias and how are you mitigating the problem of the efficient market hypothesis and back testing on data that's different than what actually happened at the time?
And the kiddos are like: "LOL what are those things?". Put down the Caffeine man, it's making you think you're superman and you're not. A thousand people think they can step into the ring with the prize fighter. The most exuberant and excited ones like this are the first to fall.
The developers and quants who have what it takes in this department are quite dull. You may have passed one on the train and thought he was a hobo. You're a great cheerleader though. Word to the wise: don't quit your day job until after the software is proven to work, not before.
Not to be a downer though. Keep going, you'll learn. https://www.udacity.com/course/machine-learning-for-trading--ud501
Save this video and re-watch it in 10 years. You're going to cringe so hard your face is going to melt off.
Arch Linux made the switch almost 5 years ago with barely any issues. If you need to use Python 2, just specify it at the beginning of your script, or simply use a virtualenv with Python 2.
To add to this, here's the definitive answer on Stack Overflow regarding "Asking the user for input until they give a valid response", just to save y'all a search.
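For reference, the usual shape of that loop looks something like this; the `reader` parameter is just an illustrative twist to make the function easy to test:

```python
def ask_positive_int(prompt, reader=input):
    """Keep asking until the user gives a valid response."""
    while True:
        raw = reader(prompt)
        try:
            value = int(raw)
        except ValueError:
            print(f"{raw!r} is not a whole number, try again.")
            continue
        if value <= 0:
            print("Please enter a positive number.")
            continue
        return value

# age = ask_positive_int("How old are you? ")
```

The key idea from that answer: loop forever, validate inside the loop, and only `return` once the input passes every check.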
I don't know pyramid or pylons, but everyone I know who works with web frameworks these days won't shut up about FastAPI as the new darling, especially its built-in support for async functionality.
There's a website called edabit with a ton of exercises for Python (and many other languages), classified by difficulty level. I think people that are registered can submit new ones.
Yeah. It's probably time we start moving to encrypted DNS, something like https://www.opendns.com/about/innovations/dnscrypt/ maybe?
If the DNS request is encrypted, and the HTTP request is also encrypted, there's not a lot left for the ISP to know about, is there?
Not really. Git only recently moved from SHA-1 to SHA-256. This code is probably just brute-forcing with very poor efficiency if it doesn't use a dedicated compiled C module. If you really want to crack some hashed passwords, use tools like hashcat; it has proved quite efficient even on a typical home computer's GPU.
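To make the distinction concrete, here's a toy dictionary attack with `hashlib`; it's nowhere near hashcat's speed, and the wordlist and target are made up:

```python
import hashlib

def crack(target_hex, wordlist):
    """Toy dictionary attack against an unsalted SHA-256 hash."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hex:
            return word
    return None

words = ["letmein", "hunter2", "password"]
target = hashlib.sha256(b"hunter2").hexdigest()
print(crack(target, words))  # hunter2
```

Pure-Python loops like this manage maybe a few million hashes per second at best, while GPU tools do billions, which is the whole point of reaching for hashcat.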
Hi, author of Enaml here. Development of new features in Enaml has slowed in the past year as my work focus has shifted. Enaml is fairly stable however, and is currently in production at multiple Fortune 500 companies, including a top 10 investment bank. Happy to answer any questions about the project.
Also, Enaml seems to be trending on Hacker News today, if you want to read the comments there.
Edit:
Here is my most recent talk on Enaml, which will answer most of the "what is this?" questions: https://vimeo.com/79536617
> That's gigantic.
2 meg is "gigantic", but I'm the one being sarcastic? Is this real life?
It's trivial compared to something like Docker for deployment, which people use just to overcome the problems of replicating run environments.
PyCharm has a free community edition. You only have to pay if you want stuff like web development and database features.
Scroll to the bottom here to compare: https://www.jetbrains.com/pycharm/features/
> But I just don't get why Python insists on such things as aesthetics, after all, we are writing computer code, not English, right?
~~CODE IS WRITTEN FOR HUMANS, NOT COMPUTERS~~
You write your code so that other people can read it, not so that computers can understand it. Other people will have to maintain your code, you will have to maintain your code, and it's a good idea to make code easily readable so that your brain doesn't explode.
> [...] a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
H. Abelson, G.J. Sussman, J. Sussman, "Structure and Interpretation of Computer Programs"
Don't learn Python. Learn programming. Learn how to write code. Learn how to find solutions to problems. Learn how to make them better.
That will take some time. I recommend Introduction to Algorithms (CLRS) for that. Once you have that down, you can start trying to implement those solutions in Python (or any other language, since programming is universal). Once you know programming, learning a language is easy.
I like to follow advice from Robert Martin's Clean Code book:
> The proper use of comments is to compensate for our failure to express ourself in code. Comments are always failures. We must have them because we cannot always figure out how to express ourselves without them, but their use is not a cause for celebration. So when you find yourself in a position where you need to write a comment, think it through and see whether there isn’t some way to turn the tables and express yourself in code.
> The older a comment is, and the farther away it is from the code it describes, the more likely it is to be just plain wrong. The reason is simple. Programmers can’t realistically maintain them.
> Comments Do Not Make Up for Bad Code! One of the more common motivations for writing comments is bad code. We write a module and we know it is confusing and disorganized. We know it’s a mess. So we say to ourselves, “Ooh, I’d better comment that!” No! You’d better clean it! Clear and expressive code with few comments is far superior to cluttered and complex code with lots of comments. Rather than spend your time writing the comments that explain the mess you’ve made, spend it cleaning that mess.
Example:
Bad:

```java
// Check to see if the employee is eligible for full benefits
if ((employee.flags & HOURLY_FLAG) && (employee.age > 65))
```

Good:

```java
if (employee.isEligibleForFullBenefits())
```
Another data point, with some idea of system setup and use-case...
I don't think you should get hung up on what someone else says is the "best" tool. You should pick the tool you feel most productive with and master it. In 20 years, I've only picked two editors and no IDEs. An IDE helps primarily in two ways:
For the most part, I found that PyCharm (and I tried it) was too opinionated, and for what I do, the controls/UI were not readily available. Some of the simplest things (like font selection) are buried.
I'm equally not fond of the anaconda ecosystem as it does not play nice outside of its ecosystem. I feel like, in some ways, Continuum Analytics is the Microsoft of Python - they want to control that entire development stack.
If you're already an emacs nut, I encourage you to check out spacemacs: http://spacemacs.org/
`if`? meh, never needed it.
https://repl.it/@darkarchon/ExtrovertedGrossFeeds
```python
import sys

def print(msg):
    '''Because this isn't necessary either.'''
    sys.stdout.write(msg + '\n')

class Else(Exception):
    pass

class If:
    def __init__(self, condition):
        while not condition:
            raise Else

    def __enter__(self):
        pass

    def __exit__(self, *args):
        pass

try:
    with If(True):
        print('this should print')
except Else:
    print('this shouldnt print')

try:
    with If(False):
        print('this shouldnt print')
except Else:
    print('this should print')
```
I've read so many of those O'Reilly books and they are all super dull and sometimes hard to follow. The best Python book I came across is Python Crash Course: A Hands-On, Project-Based Introduction to Programming: https://www.amazon.com/dp/1593276036/ref=cm_sw_r_cp_apa_i_OByyCbMTJD8GC
To clarify what this means: Python 2 will still be available in the repositories, and will still be /usr/bin/python
. What's changing is that core applications like update manager will run on Python 3, rather than Python 2, and when that's complete, the default installation won't include Python 2.
I think the goal is rather ambitious for Quantal: there are quite a few parts that need porting or packaging. Hopefully at least one application can be ported this time, and the rest will make it for R.
No, in fact this is one of the things I suggested in a comment on one of your previous posts.
`if __name__ == "__main__": your_function()`
You'll put that outside your function. It won't cause recursion there; it'll start the program properly. There are good reasons for this convention, which have to do with the way modules are handled in Python. FreeCodeCamp has a decent write-up about this.
Mimicking known, popular search engine bots is a great way to get your traffic blocked by sites that use Akamai, Cloudflare, etc. With about 30 seconds worth of effort you can find the AS numbers and netblocks assigned to Google, Microsoft, etc. I know for a fact that Akamai's bot manager is very good at identifying legitimate bot traffic from these companies versus fraudulent bot traffic. It's trivial for them to determine if a Googlebot request came from a Google IP address or some third party IP address. If they see Googlebot crawling a site from some third party non-Google IP address they know it's bogus.
Looks cool. Tried it out on Mac OS Sierra. Didn't need to do anything about the 32-bit, so that was easy :)
Have you taken a look at PyPy? It could potentially increase the performance dramatically.
I tried starting it with PyPy, but no window came up. The mouse got captured, and there were no exceptions. So it's probably some minor detail, as Pyglet should otherwise work: http://pypy.org/compat.html
Dependency links are a setuptools feature that lets you give setuptools external URLs at which to find your dependencies. They're mostly used to point to dependencies that aren't hosted on PyPI, and only a small minority of publicly available packages use them.
This does not affect most dependency resolution, since most PyPI packages only depend on other PyPI packages.
Edit: Here's some discussion about the feature and its inclusion in pip, including some examples.
You might also like his course on Udacity which you can work through for free.
https://www.udacity.com/course/design-of-computer-programs--cs212
Apart from that, I wish PyPy were completely CPython 3.6-compatible for all platforms, and the main implementation, for performance and speed reasons; beyond that, there is nothing directly bad about Python, for a dynamically typed language. Previously, I would have said it needed good string formatting, but with the new f-strings, I don't really feel that's a problem. It would also be nicer if there were a universal, consistent, and simple way to package Python applications into nice non-Python-requiring distributables, but there are (slightly painful) ways to do it now.
That's not to say that there aren't things I like in other languages that I wish Python had: Ruby's blocks are nice, for example, but they're not things whose absence makes Python bad.
Yes, I wish there were a static language based on Python, something like Crystal is attempting to become for Ruby, but I wouldn't want Python itself to be static, that's not what it is.
Basically, Calibre distributes the Python runtime that it uses with the application, along with the libraries it uses, locally. It's effectively installing a private version of Python (or at least a subset of it) alongside itself.
There are a few tools that'll do this process for you, such as py2exe or pyinstaller.
yes. and many people seem to misunderstand what it is.
i manually created syntax highlighting that reflects how it works: here
it’s an expression. no evaling after the fact. no security risk. no reduced readability once your syntax highlighting is updated.
Whenever I think about metaclasses I'm reminded of this quote from the Zen of Python:
> Simple is better than complex. >Complex is better than complicated.
Use of metaclasses is an example of the latter. 99% of the time, you don't need or want to use them. But there are very rare cases when introducing the complexity of a metaclass is better than the alternative, complicated solution (which often involves using a hundred nested if/elif/else or try/except/finally statements, and/or the dreaded eval).
The Django ORM is a great real-world example. Those interested in the actual implementation can read the code in ModelBase and Model (lines 28 and 275) and the get_model function. This code looks complex (and it is), but is much easier to understand than you'd think. The major (imo) feature it adds is the handling of type translation from the database to your program, which saves a huge amount of time for the programmer. I don't want to ramble on too much about Django because not everyone here is a Django user, but I think this is a really cool example of real-world metaclass use.
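As a toy illustration of that 1% case, here's a registry-style metaclass in the spirit of (but far simpler than) what Django's ModelBase does; all the names are hypothetical:

```python
class ModelMeta(type):
    registry = {}

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        if bases:  # skip the abstract base class itself
            # Record every "field" declared on the class body, ORM-style.
            cls._fields = [k for k in namespace if not k.startswith("_")]
            ModelMeta.registry[name] = cls
        return cls

class Model(metaclass=ModelMeta):
    pass

class Article(Model):
    title = str
    views = int

print(ModelMeta.registry)  # {'Article': <class '...Article'>}
print(Article._fields)     # ['title', 'views']
```

The point is that merely *declaring* `Article` registered it and collected its fields, with no boilerplate at the call site: exactly the kind of thing that's miserable to do any other way, and overkill almost everywhere else.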
Via HN. Discussion: https://news.ycombinator.com/item?id=7861942
Current top comment, by HN user jnbiche:
> I'm really glad to see some of the Python committers taking a serious look at the GIL. Python is either posed for great victory (given its rapid rate of adoption in academia) or slow failure (given the rapid rate at which server apps are starting to migrate from Python to Go).
>
> However, between accomplishments like Micropython (huge potential for Python on mobile/resource constrained devices), PyPy's slow but steady gains, and projects like this, it's at least an interesting time for Pythonistas.
>
> Now, if we could only get an optional static type checker... (heresy, I know). Dynamic typing is great for quick prototyping, and I would never want to lose that in Python, but I'm very uneasy now taking on any large projects or long-term projects without static typing. Mypy holds some promise here, but I think it will take sponsorship from a big company to push something like this to a mature state.
A couple tips for Anaconda:
* Not everything can be installed with `conda install`, especially obscure pure-Python packages. Use pip to install these, but first...
* `conda install pip`, and then you can `pip install` anything. If you don't, pip tries to install things in the global site-packages directory, which doesn't work.
They definitely do use open source libraries - especially newer companies which wanted to get up and running in a hurry. Older companies may be a bit less trusting of open source, but even the sockets code in Windows, if I recall correctly, is based on BSD sockets.
> For something as simple as an HTML parser...
HTML parsers are not simple! Especially for a crawler, they have to handle as much broken HTML as possible, and even valid HTML needs a big spec.
Because parsing websites is so central to Google's business, I would expect that they maintain their own parser, though it may have been forked from an open source one. They're more likely to use open source code for auxiliary things, where they're not going to get a competitive advantage by writing their own. For instance, Android includes SQLite for apps to store data.
This is actually covered in PEP 328. If you're on Python 2.5 or later, you can use:
`from __future__ import absolute_import`
and get pretty much exactly the behaviour you need. I believe this is standard in the 3.x series.
One of the nicer things in the Python community is code style consistency. PEP 8 is used for the standard library and although that's not necessary for user code, it's a nice guide nevertheless. One of the things it recommends is to avoid comparing boolean expressions with True/False. ;)
Your script works and that's what counts, but here are some tips anyway:
* Use `os.path.join()` for path concatenation instead of adding strings together. This way you won't have to worry about missing or duplicate path separators in each part.
* Use `os.path.splitext(s)[1]` to get the extension. Before comparing it, use `os.path.normcase()` to convert it to the proper case instead of always using `lower()`. This will handle the case-sensitivity difference between Windows and Linux.
* Use `'/'` in literal path strings. Windows accepts them just fine, and you can use `os.path.normpath()` if you want to convert them to the native platform format (e.g. for displaying). It looks cleaner and, on Windows, it avoids the need for escaping the backslashes or using raw strings.
* You could also speed up the `grouping()` function by splitting the extension strings once at the start of the script and using `in` instead of `count()` to search for a match. `count()` has to test all elements, and although that doesn't matter for such a small number of items, it's (again) a matter of style.
EDIT: Also, the special Windows folders like the desktop and My Documents are localized, so you need to use a special function to get their real names if the system language is not English (in Greek, for example, they are called "Επιφάνεια εργασίας" and "Τα έγγραφά μου" respectively). One way is to use `shell.SHGetFolderPath()` from Mark Hammond's pywin32, which every Windows Python user should have installed.
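Putting those tips together, a sketch might look like this. `group_by_extension` is a hypothetical stand-in for the original grouping() function, not code from the post:

```python
import os.path

def group_by_extension(paths, extensions):
    """Group file paths by extension, portably across Windows and Linux."""
    groups = {}
    for path in paths:
        # normcase() folds case on Windows, so '.TXT' matches '.txt' there
        ext = os.path.normcase(os.path.splitext(path)[1])
        if ext in extensions:  # 'in' stops at the first hit, unlike count()
            groups.setdefault(ext, []).append(path)
    return groups

# join() handles the separators; forward slashes work fine on Windows too
notes = os.path.join("C:/Users/me", "Desktop", "notes.txt")
print(group_by_extension(["a.txt", "b.py", "c.txt"], {".txt", ".py"}))
```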
Oh, and Keith's suggestions are also great.
Actually, there’s nothing wrong with decimal per se, but if you use integers, then the unit of the currency would be cents. So you wouldn’t store 1.99, but 199. This is, for example, how Stripe works; you can see it at work in their API: https://stripe.com/docs/api/balance/balance_object. I think the idea behind it is that you don’t need sub-cent precision in financial applications, and storing a price as an integer number of cents is much simpler.
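A tiny sketch of the difference (the price is made up):

```python
# Prices stored as integer cents: arithmetic stays exact
price_cents = 199                     # $1.99
total_cents = 3 * price_cents         # 597, exactly
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $5.97

# Binary floats can't represent most decimal fractions exactly
print(0.1 + 0.2)                      # 0.30000000000000004
```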
ProjectEuler.net is good fun. If you like fractals, fire up turtle! It's really easy (it was originally designed for kids learning Python) and lets you do some cool stuff. Here are some examples: pythagoras tree, fern.
Current Python
```python
if case in [0]:
    print "You typed in zero"
elif case in [1, 9]:
    print "a perfect square"
elif case in [2]:
    print "an even number"
elif case in [3, 5, 7]:
    print "a prime number"
elif case in [4]:
    print "a perfect square"
elif case in [6, 8]:
    print "an even number"
else:
    print "Only single-digit numbers are allowed"
```
vs c-style switch
```c
switch (n) {
    case 0:
        printf("You typed zero.\n");
        break;
    case 1:
    case 9:
        printf("n is a perfect square\n");
        break;
    case 2:
        printf("n is an even number\n");
    case 3:
    case 5:
    case 7:
        printf("n is a prime number\n");
        break;
    case 4:
        printf("n is a perfect square\n");
    case 6:
    case 8:
        printf("n is an even number\n");
        break;
    default:
        printf("Only single-digit numbers are allowed\n");
        break;
}
```
What are you gaining here from a syntax perspective?
In C, a switch is essentially a syntactic convenience that compiles down to an if-else chain (or a jump table). Python doesn't benefit at all from this approach, and it's only worth adding new syntax if there's something special about it that isn't available already within the language. Any advantage of a switch statement would have come from what the interpreter did behind the scenes (if it did anything at all).
PEP 3103 has most of the discussion in it. If you read through it, you'll gain a better insight.
A lot of other people have mentioned that you were really being examined on your problem-solving and algorithm skills, not your general coding skills.
Beyond that though, it seems like you missed a pretty important part of the solution: the standard library's sort is O(n log n), whereas this is simply an array merge, which is O(n). That's why it should be faster (much faster, even compared to the C-implemented sort routine). Not realising that (or, if you did realise it, not mentioning it to the interviewers) may have played a significant part in you not getting the job.
Here's a quick solution I hacked together:
```python
def merge_sorted(xs, ys):
    res = []
    xs_i, ys_i = iter(xs), iter(ys)
    x, y = next(xs_i, None), next(ys_i, None)
    while x is not None and y is not None:
        if x < y:
            res.append(x)
            x = next(xs_i, None)
        elif y < x:
            res.append(y)
            y = next(ys_i, None)
        else:
            # equal values: keep both so no elements are lost
            res.append(x)
            res.append(y)
            x, y = next(xs_i, None), next(ys_i, None)
    # append the leftover element (if any) and drain the remaining iterators
    if x is not None:
        res.append(x)
    if y is not None:
        res.append(y)
    res += xs_i
    res += ys_i
    return res
```
This version is indeed faster than yours at larger sizes. I started to see a difference around 10,000 elements. You can see (and run) the testing code here.
Easiest way is to download a windows installer for one of the many scientific Python distributions. My personal choice is Continuum's Anaconda, but there a few more outlined in this SciPy wiki page.
There was an interesting find at Google: "Being good at programming competitions correlates negatively with being good on the job" [1].

I like this Quora answer for your question. tl;dr: the skills most valued in a working programmer are not the same subset of skills that competitive programming requires. I would recommend working on projects, but with a small team (5-15 devs).
I like to use Kivy for GUI stuff, but I always do it with Python 2.7, as it just works (I generally download everything from Python Unofficial Binaries Page, Kivy and Pygame in this case).
I think they do support Python 3 now though, but it's not quite as super simple as using the unofficial binaries downloads; see here.
I had the same reaction. Didn't even bother messing with it due to the $250/year offline fee or being forced to make my data public. You can get it up and running offline in IPython/Jupyter here: https://plot.ly/python/offline/
I don't think that will happen anytime soon. Flask depends on werkzeug and Jinja2 and that is a good thing. Bottle has a strict "no dependencies" policy and that is a good thing, too. Both frameworks have their right to exist.
But fear not, I am working on a plugin for Bottle that adds support for Werkzeug's Request/Response objects and exceptions and integrates the interactive debugger. Jinja2 is supported anyway, so, with that plugin, you could turn Bottle into Flask quite easily.
Edit: I am Marcel :)
You could take a look at Codewars. It has a variety of tasks created by users, sorted by difficulty and a tag system. It also has a gamified account rank system that's used to offer you random problems of "appropriate" difficulty, but you can just search for problems by tag or difficulty if that doesn't appeal to you.
After you successfully complete an exercise you can view other people's solutions. Or you can do that if you're stuck, but then you don't "rank up" from that problem; again, this might or might not appeal to you.
Might not be technically correct, but: Anaconda is a distribution of Python that contains everything you need to get started, and its own package manager (like pip). You can easily create virtual environments and install packages, and it will take care of all/most of the "behind the scenes" dependency management. It's great for people like me who just want a Python with mostly generic packages that "works", and one which I can recreate on multiple machines.
WhatsApp offers an API for businesses; I guess that's what KLM is using. It's even in their customer stories.
I sort of stopped going to a meetup group because the guy running it was using Python 2... for tutorials and beginner stuff, "because there's no reason to use Python3".
I think there are situations where there's a good excuse to use Python 2. But a beginner book is not one of them.
I think more people need to take a stand on this issue in a significant way.
"The C Programming Language" is a great book too. But I've heard in many places that it's no longer a good resource for learning modern C.
That was tough to read. Whatever happened to prose? (edit: I see, those were just slides from a presentation)
Meanwhile a lot of what he says is basically just what proponents of Haskell and many other functional languages have been arguing since the ancient days: Write as many small, pure functions as possible, treat IO-dependent code as filthy and keep it separate and as minimal as possible. It's not as much a question of "architecture" as it is one of programming paradigm. Some languages make it easy (or force you) to code in this style, others make it more difficult and non-obvious.
Python is an imperative language with some functional programming capabilities (even though Guido hates them). If you're wondering why I say "some" try composing two functions. In Python it is much simpler to write a loop than to use the functional approach, because functional programming in an imperative language typically involves a lot of line noise. If you try to follow the "pure" paradigm as suggested in OP's post, you either end up writing completely indecipherable one-liners or you have to define multiple generators in a row to do what could be a simple for loop. Either way the result becomes harder to read and to understand. Which is sad but a natural limitation of the language.
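To make the composition point concrete: there's no built-in operator for it, so you write a helper (this `compose` is my own sketch, not a stdlib function):

```python
def compose(f, g):
    """Return a function that applies g first, then f."""
    return lambda x: f(g(x))

shout = compose(str.upper, str.strip)
print(shout("  hello  "))  # HELLO
```

Compare Haskell's `f . g`: in Python the helper (or a chain of nested calls) is exactly the kind of line noise described above.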
cx_Freeze - like py2exe but supports Python 3: http://cx-freeze.sourceforge.net/
EDIT: cx_Freeze collects all the files you need to run a program, and makes an exe to start it. You can distribute them in a zip file, or it can make an MSI installer for you. It doesn't have a single-file exe mode, though.
I've used PyInstaller with some success. One advantage is that you can build executables for Linux, Windows and Mac OS X with a single tool. Example: The logview application, which is a Qt-based viewer for logging events from the standard library logging package. Although I had to do a little extra work to build the .dmg on Mac OS X, PyInstaller got me to the point where I had a working app folder.
Generally, the user doesn't need to have Python installed in order to use a PyInstaller-built application.
AFAIK py2exe is Windows-only - you mentioned that you work on Linux, but are your users Windows-only?
Hey, mitsuhiko.
Firstly, awesome work on Flask. I love that you can use it for tiny apps or legitimate sites, and it's very easy to get into.
My problem is with Blueprints. The documentation of them is rather limited in actual examples of what they're for. I understand how to create them, but the documentation is lacking as to why I would. Are they like Django apps where you can separate your site code based on functionality? I tend to learn best when I have real-world examples, and Blueprints are one of the things about Flask that I haven't been able to really "get".
I've also been trying to figure out a good way to structure larger projects. I think if there was one thing I'd like to see, it would be an example project structure with multiple blueprints and an app.
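To make the question concrete, here's my current guess at how blueprints are meant to be used (a hypothetical sketch, with invented route names):

```python
from flask import Blueprint, Flask

# In a larger project this would live in its own module, e.g. admin.py
admin = Blueprint("admin", __name__)

@admin.route("/")
def dashboard():
    return "admin dashboard"

app = Flask(__name__)
# Mount the blueprint's routes under a prefix; the same blueprint
# could be reused on another app or under a different prefix
app.register_blueprint(admin, url_prefix="/admin")
```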
Thanks for all your hard work with Flask and its extensions!
Honestly I'd rather stab my eyes out with icepicks than write GUI software in Python, and I say that as a 10-year Python dev. It doesn't have a good impedance match with ANY toolkit, and NONE of the available toolkits support two-way data binding AFAIK.
I'd write the GUI layer in the toolkit's native language (e.g. C++ for Qt) and embed the Python interpreter or call it via RPC.
Or just write a web app and use D3.js for visualization. D3 uses a declarative approach to bind data to DOM elements. It can render HTML, SVG, and canvas.
Here's a POC web app I wrote that used a python backend to stream live data via websockets to a D3 based browser frontend https://github.com/n1ywb/wavefront-web/blob/master/wavefrontweb/static/js/main.js
PyCharm. They have a free community edition, or a licensed professional edition.
https://www.jetbrains.com/pycharm/features/editions_comparison_matrix.html
If you are using it for an open source project, you can apply to use it for free. I use it daily, and it is awesome.
Not only is that easy to read, but it's used similarly in other languages as well. It's a good standard way to specify what you're ignoring or throwing away, especially in unpacking.
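For instance, in unpacking:

```python
# '_' marks values you're deliberately throwing away
first, _, last = "Ada Byron Lovelace".split()

# Extended unpacking can discard a whole middle section
head, *_, tail = [1, 2, 3, 4, 5]

print(first, last, head, tail)  # Ada Lovelace 1 5
```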
If you're using the scientific stack (numpy/scipy/matplotlib/ipython etc.), Anaconda is the way to go. It obviates all the issues surrounding installation of packages and use of multiple Python versions and/or multiple virtualenv-like environments (conda envs are like virtualenv but much, much better for compiled packages).
The main complaint I've heard about Anaconda is that it doesn't use the system Python, but this is confusing to me: I'm not sure why you'd ever want to use the system python for scientific computing or development. For example, on *nix systems like OSX, there are system tools which depend on Python. This means that if you, say, update numpy on your system, you could break other system applications or utilities that you had no idea were using it.
Use Anaconda. You'll be happy you did.
Edit: I'd actually recommend Miniconda instead of Anaconda in most cases: it gives you a minimal installation from which you can `conda install` or `pip install` any other tools you need. Don't worry too much about which Miniconda Python version you choose: it's really easy to add an environment with another.
I tend to go with the first one, but a mantra of Python is:
Explicit is better than implicit.
So IMO if the second one is clearer to you because it is more explicit, go with that.
Acrylic: Supreme Tech 12" x 24" Acrylic See-Through Mirror, 3mm https://www.amazon.co.uk/dp/B01G4MQ5OW/ref=cm_sw_r_cp_apip_uClgNZZuxvo6v I'm hosting on ACEPC T11 mini windows 10 pc but raspberry pi would be possible if you're comfortable with Linux :)
> You can view it here: https://repl.it/repls/FloralwhiteLuckyField

Just click the 'run' button (top center). It won't open the URLs in a new tab, but everything else should work as expected.
More than 2500 commits. More than 100 contributors. More than 11 months in the making: https://www.freecadweb.org/downloads.php
0.18 is not a big change in terms of features but very important for the core: FreeCAD now supports Python 3 and Qt 5. Still, some important changes and convenience functions have been added in the user space as well: https://www.freecadweb.org/wiki/Release_notes_0.18
You sound like someone who would love to check out An Introduction to Interactive Programming in Python, a free course offered through the fantastic Coursera platform.
I'm having a hard time making sure I don't enroll in tens of dozens of those myself, found this Python one today and it puts me up at 10 courses for this semester. I wish I had been this active when I was at University myself...
Awesome piece of software; it lets you do MapReduce sort of stuff and, I believe, a lot more. It can run on Hadoop.
Basically, you might set up a pipeline so that you do a series of operations to a massive amount of data. For example, let's say you had a huge dump of metadata of wikipedia pages, one line of JSON in a file per URL. You have this huge - let's say 50 terabyte - file.
You can do things like map, filter, etc on it and write it out very easily in pyspark. Something that looks like:
```python
data.filter(lambda x: x['genre'] == 'history') \
    .map(lambda x: x['citations']) \
    .map(lambda x: [get_hostname(i['url']) for i in x]) \
    .toJSON('myresults.json')
```
That's just example pseudocode, but the idea is you can do things like this very quickly over huge amounts of data. That pipeline might filter out all metadata of pages related to history, pull out the citations, get the hostname of the URLs in the citations and dump it all to a file.
And behind the scenes the data is shuffled to X number of servers, they all work on their dataset, then return results, so with a small amount of code, you can essentially compute stuff like that on terabytes of data in minutes (or even seconds with enough servers), by distributing it to a huge cluster of servers.
It abstracts a lot of stuff you'd need to do to work with hadoop - data shuffling and all that. Extremely cool stuff. I hate to say "big data" but this is an example of how you might work with big data. Extremely scalable, extremely flexible in what you can do, and easy as hell to solve random problems that someone might throw at you. It doesn't matter if the data is a huge file of serial JSON, if its in mysql, or even if it's a huge parquet file. It'll abstract all that bs out so you can do what you need to do, and use all your computing resources available to do it as quickly as possible.
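The same pipeline shape can be mocked up locally with plain Python; the records below are invented stand-ins for the Wikipedia metadata, and `urlparse` plays the role of the hypothetical get_hostname():

```python
from urllib.parse import urlparse

records = [
    {"genre": "history", "citations": [{"url": "https://archive.org/details/x"}]},
    {"genre": "science", "citations": [{"url": "https://nasa.gov/missions/y"}]},
]

# filter -> map -> map: the shape Spark would distribute across a cluster
history = filter(lambda r: r["genre"] == "history", records)
citations = map(lambda r: r["citations"], history)
hostnames = [[urlparse(c["url"]).hostname for c in cites] for cites in citations]
print(hostnames)  # [['archive.org']]
```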
I'd recommend using pyenv to manage your python versions. 3.5.0b3 is already added to pyenv.
Virtualenv is for isolating environments; it will not install Python versions for you, as /u/Southclaw implied.