A couple of tips for Anaconda: conda install can't install everything, especially obscure pure-Python packages. Use pip to install these, but first run conda install pip, and then you can pip install anything.* If you don't, pip tries to install things into the global site-packages directory, which doesn't work.
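To see why that matters, here's a minimal stdlib-only sketch that prints where pip would install pure-Python packages for the interpreter you're running; inside an activated conda env, both paths point into the env's own directory:

```python
import sys
import sysconfig

# Where "pip install" would place pure-Python packages for *this*
# interpreter. In an activated conda env this sits under the env's
# own prefix, which is why installing the env's own pip matters.
site_packages = sysconfig.get_paths()["purelib"]
print(sys.prefix)
print(site_packages)
```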
> To be honest, I'm not entirely sure why this hasn't been done before.
Because it's a lot harder than we think.
Disclaimer: I'm not a data scientist, but I work with a lot of them. I have therefore been in a position to see the R vs. Python wars from the outside, so to speak. And I can tell you that even with all the underlying advantages going for it, including its massive community, Python is only now getting to the point where it can seriously compete with R in this area.
The Python infrastructure for data scientists is now massive, yet still not as unified as that of R. That said, tools like Anaconda are now making it possible even for less technically inclined scientists to install and maintain their own Python data analysis stack, including:
In short, it's getting to the point where it's becoming conceivable to use Python as a viable replacement for R (or Mathematica) for data analysis.
I'd love to see Haskell getting to that point, but it'll be a long road. For one thing, we don't have a community the size of Python's, especially not in data science.
PS: Anyone who is … aware enough of PLT to be reading /r/haskell and yet who still uses R should read the following paper:
http://r.cs.purdue.edu/pub/ecoop12.pdf
Once you read that and understand it, you ought never to want to touch R again. If the authors are right (and I see no reason to doubt them), programming in R should be considered positively hazardous. And we probably ought to re-evaluate the level of trust we put into any data produced by R.
If you're using the scientific stack (numpy/scipy/matplotlib/ipython etc.), Anaconda is the way to go. It obviates all issues surrounding installation of packages and use of multiple Python versions and/or multiple virtualenv-like environments (anaconda envs are like virtualenv but much, much better for compiled packages).
The main complaint I've heard about Anaconda is that it doesn't use the system Python, but this is confusing to me: I'm not sure why you'd ever want to use the system python for scientific computing or development. For example, on *nix systems like OSX, there are system tools which depend on Python. This means that if you, say, update numpy on your system, you could break other system applications or utilities that you had no idea were using it.
Use Anaconda. You'll be happy you did.
Edit: I'd actually recommend miniconda instead of Anaconda in most cases: it gives you a minimal installation from which you can conda install or pip install any other tools you need. Don't worry too much about which miniconda Python version you choose: it's really easy to add an environment with another.
I haven't tried Python on Bash.
The 'recommended' way, however, of installing Python on Windows (incl. Win10) is to use Continuum's Anaconda. This is especially useful if you want to use C-compiled packages, such as NumPy or scikit-learn, as these are otherwise known to be tricky to get running on Windows. If you're not using such packages, the normal CPython distribution will be just fine.
BTW, I typically use the miniconda which is smaller, and then download specific packages based on needs.
Continuum's 'conda' is great for this.
Download Anaconda if you want a Python distribution with a bunch of scientific/numerical packages.
Download Miniconda to just get the conda package manager (and Python).
You can use 'conda' to create environments. When you make an environment, you can specify the exact Python version to be contained in it. For example:
conda create --name py3 python=3
(I believe you can also specify specific versions, like 3.4.2)
I do that on my system to have a Python 3 environment. Then in the command line, I can just do:
activate py3
And now 'python', 'pip', and 'conda' all point to their executables in the py3 environment (which is just a folder at <root Miniconda folder>\envs\py3)
Yes, as /u/frodre points out.
In fact, these days, Anaconda is a bit too fully-featured for the average data science programming stack. So, instead, I now use miniconda to set things up. Each project I work on gets its own environment and only the packages it needs, and porting that environment configuration is incredibly easy. As an example, consider Christine Doig's Bokeh/Blaze tutorial from the SciPy conference currently happening. She provides this simple yaml file with the distribution for her tutorial on git:
name: scipy-tutorial
dependencies:
  - blaze=0.8.0
  - bokeh=0.9.0
  - ipython=3.2.0
  - ipython-notebook
  - netcdf4
  - boto
From within the folder containing the code, creating the special, minimal environment necessary to run her code just requires executing the command conda env create. And voila! Special environment with the specific package versions she was using.
You can specify versions of your packages if you're worried about conflicts. Additionally, if there are any other dependencies, conda will inspect your configuration and compute the necessary packages and versions to download automatically for you. I really can't recommend it highly enough.
I'd highly suggest the Anaconda python distribution, it solves a lot of the python installation issues via their conda package manager. It not only creates virtual environments for python packages but has support for the binary dependencies as well.
Look into using virtual environments; this is exactly what they were made for, when you don't want to mess with your system-level Python setup. There are a few different virtual environment managers, but my favorite is miniconda.
Why not use the Anaconda Python distribution and the excellent conda package manager?
https://store.continuum.io/cshop/anaconda/
and the lite version, Miniconda:
http://conda.pydata.org/miniconda.html
They have a huge list of binary packages for all major platforms.
Calibre is a rubbish example: its install instructions include using the system Python to execute random code off the internet with NO validation.
Malware, anyone?
While there are tools like cx_Freeze, py2exe, and so forth; IMO the best way is to build your application like a standard python module, then ship the required python environment including that module. Look at examples like miniconda which package python and conda together. Just substitute your application for conda.
The difficulty in making anything fancy is that you cannot avoid getting into system specifics, for example getting into the start menu in Windows requires registry changes, but neither of these concepts make sense on any other platform. That being said, it should be easy to include a "build" system for each platform, ignore this directory when packaging but you'd stick in your e.g. NSIS config, dpkg/rpm build, hdiutil etc. setup. So you can maintain a single codebase, but still build nice system-specific installers.
Why don't you try conda environments, which come with the conda distribution? For your use case, you probably would also want to look into binstar (also from Continuum Analytics), which they recently renamed Anaconda Cloud. Binstar will give you a hosted service to package and deploy your own packages, and exactly those versions you need to go along with them, for all your environments. I found it to be a big improvement and a pretty comprehensive workflow.
Probably the best way to install Python on any system (Homebrew does not ship with OS X; it's a third-party utility) is Anaconda. It comes with a bunch of default packages already installed, and it makes it really easy to add new ones.
Creating virtual environments is also really simple (for example, if you wanted to do something with python2.7 or try out the latest release of Python, even if packages you like aren't compatible yet). All you need to do is conda create with the name of the environment you want and the specifics.
All you need to do is install the anaconda like you would any other package.
Also, if you are low on hard drive space, you can look into miniconda instead. It's anaconda without the extra utilities (e.g. Spyder)
I would consider grabbing the whl file from Christoph Gohlke's website: http://www.lfd.uci.edu/~gohlke/pythonlibs/
pip can directly install whl files: pip install <whl_file>
or check out the free conda/anaconda if you need more of the scientific stack and a nice way of isolating different environments:
First, have a look at the details of how to do Conda packaging: http://conda.pydata.org/docs/build.html
Once you've prepared a package, request an invite for binstar, and upload your package there. Then you can promote your channel to people who want to use your package.
What goes into the Anaconda distribution is up to the people who make it, but once you've made a package and shown that people use it, you can ask them if they want to include it.
Ah, but conda is even better than pip! It replaces virtualenv, pip, and something like nvm/rvm, but for Python. Choose your Python version and install packages into an environment, all in one slinky tool. Without breaking compatibility.
http://conda.pydata.org/docs/using/envs.html
And that's how programming works. We iterate and improve.
yarn hasn't broken anything - it's pure win.
FWIW, I have Theano and Keras (Keras is similar to Lasagne that Alex uses, it sits on top of Theano) running on Win7.
Grab http://conda.pydata.org/miniconda.html, then run "conda install mingw libpython"; after that I was able to install theano from the 3rd party anaconda libraries, and finally pip install Keras.
You got the basics wrong here.
PyCharm is an Integrated Development Environment: a program for managing your Python projects (including writing code).
Python is a programming language.
scikit-learn is a Python package (a sharable program, so to say).
python in your operating system is a Python interpreter system package. It's basically a program that runs your Python code. There are various versions like python2, python3 and python3.5. Usually your operating system will refer to the newest python3 version, which is 3.5.
Distro is a Linux distribution that has the Linux kernel at the core but uses different software at the desktop level. In layman's terms you could call a distro an operating system similar to Windows and the like. A few distro examples: Ubuntu, Mint, Arch. If you are new to Linux I would suggest going with either Ubuntu or Mint.
Now if you want to run scikit-learn you might need to dedicate a bit of time to it, since there is a bit of a learning curve and a lengthy setup process. Luckily some people are aware of it and have developed various tools to deal with that, like conda.
I recommend you read through something like http://conda.pydata.org/docs/intro.html, which will introduce you to the conda package manager.
Anaconda FTW. Although if OP is not interested in the scientific ecosystem that comes with the regular Anaconda, they may consider installing the minimal installer miniconda (http://conda.pydata.org/miniconda.html) instead.
The 'conda create' environment management is super easy.
http://conda.pydata.org/docs/faq.html#managing-environments
It will let you set up python 2/3 environments with whatever packages you want. Plus, it won't redownload what packages you already have.
The anaconda distribution is awesome for scientific computing. Plus if you have a .edu email address you can get a lot of their paid packages, including the MKL optimized linear algebra package.
> Numpy

How in the hell do you include this in a setup.py without forcing people to compile it constantly?
That's not really an issue with numpy, that's an issue with the infrastructure for distributing Python packages. For a long time, there was no good way to distribute pre-compiled binary packages. You used to use eggs for that, but when everyone was told to use pip instead of easy_install, eggs got left behind. Now, finally, there are wheels, but last time I checked, you couldn't upload Linux wheels to PyPI.
Conda exists precisely to distribute things like numpy as pre-compiled binary packages.
Another approach is to configure conda to automatically install pip into all new envs. Add the create_default_packages setting to your .condarc file and specify pip as a default package. This way every new env you create will get pip installed by default. Then when you source activate the env the correct pip is in your path and everything works as expected. Here are some docs about it http://conda.pydata.org/docs/config.html
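A sketch of what that .condarc entry looks like (YAML, per the conda config docs linked above):

```yaml
# ~/.condarc
create_default_packages:
  - pip
```

With this in place, every conda create run also installs pip into the new environment unless you pass --no-default-packages.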
I second Gohlke's work heartily.
If you try the conda route, consider miniconda if you don't need all the scientific packages that come with the default install.
Or, if you do actually want to try compiling yourself, it's easiest to use the compiler that compiled your version of Python: https://wiki.python.org/moin/WindowsCompilers.
conda from Continuum Analytics is basically like debian packages, but it's cross platform and includes an environment management system.
disclaimer: I work for Continuum Analytics.
If you are not breaking your school's T&Cs, then you can download miniconda. It is a self-contained Python distribution, meaning you can install whatever you want in your user directory.
I think it includes 'pip' but if not then you can do 'conda install pip'. After that just use pip to install whatever you like.
Get Anaconda python, I like using 3.4 as my base. Then use the command:
conda create -n py27 anaconda python=2.7
That will install anaconda using python 2.7 (I think it's =2.7, if not try =27). That's like virtualenv, so to 'switch' to python 2.7, type:
activate py27
Note that new cmd prompts will default to python3.4. Disclaimer though, while I just did this today on both my OSX and Xubuntu laptops, I haven't tried on Windows. But this link here says it should work.
Edit: I should mention that if you want python 3.3, use the same but name it something else (instead of py27), and do python=3.3
If you really have to use Docker, then this is fine. But if you just want "better isolation than virtualenv", you should look at using conda: http://conda.pydata.org/
You get isolation of binary libraries, without needing to deal with VMs - and it's completely cross-platform!
Since you're using anaconda, you probably want to check out conda environments. They work a lot like virtualenv but are optimized for working with anaconda.
So, you don't download both installers. Download one, then create an environment as suggested by u/drfrogsplat. I installed Anaconda 2.7 first, so to get 3.5, I ran:
conda create -n py35 python=3.5 anaconda
It's pretty straightforward: conda create makes the new environment, -n py35 sets the environment's name, python=3.5 specifies the Python version, and anaconda says that you want this environment to have access to all packages in the anaconda distribution.
http://conda.pydata.org/docs/using/envs.html Explains how to actually start using the environment.
Finally, you can use either version of Spyder to edit your code, but you probably need to switch between them to run the code (personally, I just run from the terminal when using 3.5).
You could accomplish this with anaconda's python virtual environments. As a pro, you don't muck around with the defaults at all, and you can switch between environments as your projects may require. The con is that you may have many different concurrent installations of various packages, organized by your conda virtual environment.
There is documentation on doing that here: http://conda.pydata.org/docs/test-drive.html
Your system uses python so you can't fully remove it. Anyways the whole point of virtualenv is to allow multiple copies of python that are all completely independent.
I would argue though that conda is a better solution than brew.
Anaconda's miniconda... keeps the install small, and anything you need is just a conda install away (as opposed to pip install... conda takes care of pesky problems installing on Windows).
If you want your own Python installation, you could install Miniconda, and then every package you need, either with:
conda install package_name
or
pip install package_name
Maybe someone knows if you can make the two Python distributions "talk to each other".
> You should learn to use pip
pip uninstall homercles
But to address one of your points seriously, when you say "you get a whole load of other shit you dont need", first, one could make the same argument about the standard library of Python. But as I mentioned above, you're free to use Miniconda, which includes only a base installation.
Installing stuff like numpy on Windows is always touchy since a proper build environment is often absent. That said, I'd recommend trying to install numpy via conda (the package manager behind Anaconda).
Conda (also referred to as miniconda) is designed as a standalone package and environment manager and it has worked great for me so far. You can get miniconda here - http://conda.pydata.org/miniconda.html
Once you have conda installed you can install numpy with:
C:> conda install numpy
Conda also plays nicely alongside pip.
A lot of these packages have system dependencies that pip itself can't satisfy. There's a few ways around this. One way is to install those system dependencies first and then run pip install for the package. Another way is to use your distributions packaged version of a python package.
For example, a Python package that has system dependencies is lxml. To install it via pip you first need to satisfy its dependencies on libxml2 and libxslt, as well as Python's header files, so it would look like this:
apt-get install python-dev libxml2 libxml2-dev libxslt-dev
pip install lxml
Or to install it system-wide you could use the packaged version directly:
sudo apt-get install python-lxml
Both work similarly. If you aren't into the idea of installing global packages, you'll have to go the dependencies route and then use pip to install into a virtualenv. I'd recommend looking into your distro's packages for these packages.
Another option would be to look at something like Conda, which is an alternative package manager that packages its libraries as binaries so you don't have to compile them yourself on your system each install.
Yet another option is further encapsulate things with containers using Docker, which makes things nice and repeatable for deploying to other machines / servers.
If you want to manage & deploy local packages while also making use of externally managed dependencies, you might think about setting up a local conda channel that's accessible from computers at your company.
If you're not familiar with conda, it's an open source cross-platform package management and distribution system, written in Python, but able to manage packages and dependencies in any language.
If you're looking to go further, you can try switching it to websockets https://github.com/miguelgrinberg/Flask-SocketIO. Also if you use conda build for the package, when you install your package you can also specify entry point for starting your server. http://conda.pydata.org/docs/building/meta-yaml.html#python-entry-points
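For reference, the entry-point part of a conda recipe is a small fragment of the meta.yaml; the package and module names below are hypothetical:

```yaml
# meta.yaml (fragment)
build:
  entry_points:
    - myserver = myapp.server:main
```

After installing the package, a myserver command would be on the PATH and would start your server by calling the named function.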
And if you don't need all the libraries and want a smaller download, try Miniconda at about one-tenth the size of the full package. With the conda manager, it still makes it trivially easy to download and update other packages.
I don't know how to uninstall things on Mac, but I would recommend not pip installing things into your built-in Python. Leave that alone, because it may not be easy to rebuild should something happen.
Install miniconda, which is the easiest way to get Python and manage multiple Python installs. As long as you keep track of which python and pip executables you are using (and if you say yes to the installer's question about editing your path, then the miniconda executables will always be the default), the other Pythons won't interfere.
Of course you may want to uninstall them anyway, but the way you would do that would depend on how you installed them in the first place.
> Suppose I want to use only Conda, should I install it with Pip?
You mean you don't want to install Anaconda or Miniconda and want to use only conda with your already installed Python?
I have only experience of using conda with Anaconda, so I can't answer precisely based on my own experience.
But according to the conda documentation, the answer is YES.
The documentation says:
"Conda is also available on pypi, although that approach may not be as up-to-date.
(...)
pip install conda uses the released version on pypi. This version allows you to create new conda environments using any python installation, and a new version of Python will then be installed into those environments."
> And after creating a virtual environment with Conda, do I have to install pip inside that environment to get pip packages such as Spyder and other things?
Maybe pip will be installed in a virtual environment by default, but I'm not sure. Please try it and if pip is not installed, use ensurepip in the virtual environment.
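A minimal sketch of that fallback using only the standard library: check whether pip is importable in the current environment and, if not, bootstrap it with ensurepip:

```python
import importlib.util

# Is pip importable in this environment?
if importlib.util.find_spec("pip") is None:
    # ensurepip ships with CPython and installs the bundled pip wheel.
    import ensurepip
    ensurepip.bootstrap()

print(importlib.util.find_spec("pip") is not None)
```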
But if you use conda for virtual environment management, also using conda for package management is the best practice, especially if you intend to use Python for scientific computing or data analysis (please read the "If you had to pick one Python weakness…" section of this article; the article is also a good introduction to conda).
Spyder and many other scientific packages can be installed by conda. You should try conda first, and only use pip if the library can't be installed by conda. Conda and pip can be used together in the same environment.
I was thinking of conda itself, which the package you linked to seems to work with. I haven't used conda to manage an environment for Julia, so I'm not able to evaluate the need for Conda.jl.
conda will automatically put it in the right place within the directory you set up when you installed Anaconda/miniconda. On my system, it's ~/anaconda/envs/
More info here: http://conda.pydata.org/docs/using/envs.html
If you use conda and the Anaconda build of Python, it has an "environment" feature which is more robust than virtualenv, because it completely isolates libraries at the dynamic linker level. Best of all, package installs into these environments are completely self-contained into a single subdirectory. So, you then don't have to convince your sysadmins to install matplotlib or django from some unstable RPM repo, AND they don't have to worry that your package install is going to splat stuff all over /usr/local/lib.
Just. Use. Conda.
It works. It solves all these problems, and more. It's completely open source. It's built by the folks that have built MULTIPLE python distributions, fighting against all of the weirdest esoteric compiler and linker and windows roaming profile permissions problems. And it's been shipping millions of packages to the scientific python/pydata ecosystems for a long time now.
You don't need to go off the deep end and create virtual machines just because the language's packaging clusterfuck doesn't take into account native libs. Conda takes those into account in the metadata, and all the packages in the Anaconda repository are built to be relocatable so they can get robustly isolated at the C ABI level when they are installed into a conda environment.
I have a github account that I've synchronized with one of my dropbox folders containing IPython notebooks. I have notebooks containing cheat sheets and an assortment of anything related to data analysis. Python is such a mature language with a huge, active community that there is probably a module or framework already made for what you need. You can search pypi, ask in the #python IRC channel, or search Stack Overflow.
For handling URLs: people recommend requests library
For web development: django or flask
For data analysis/statistics/scientific computing: pandas, statsmodels, numpy, scipy, ipython
For working with Excel: openpyxl, xlwings, xlrd
For working with Windows ODBC data sources: pyodbc
Working with dates: arrow
Working with system processes: psutil
web scraping: lxml, beautifulsoup4, scrapy
package and virtual environment management: anaconda, virtualenv
For plotting charts/visualizations: matplotlib, seaborn, bokeh, plotly, ggplot, mpld
Working with ORMs: sqlalchemy
Working with cryptography or passwords: pycrypto, passlib, getpass
For Windows development: pywin32
...and on...and on...and on...
Unfortunately, you are correct, and Python packaging is pretty unfriendly towards compiling C-extensions.
/beginrant
Before I explain a practical (if messy) way to deal with your problem, I'll just mention that this is the exact reason that I champion the use of conda. Have you tried compiling your Cython extension on Windows? Don't be surprised if Visual Studio chokes on compiling code that compiles fine on Linux/OSX. Conda provides a much simpler way to make sure that end users (particularly on Windows) can use your package without learning the pains of a Windows development environment.
/endrant
Anyhow, see here for a practical way to deal with making sure Numpy is installed before compilation. I might also point you to a package that I've worked on that gives a nice pattern for trying to cope with Numpy and Cython not being installed before your package. You can probably modify the Numpy build_ext
example in order to try and cope with your problem of cython-murmurhash not being installed before preshed.
binstar allows you to distribute modules built with conda. Triggering a build by pushing to github is a recent addition.
Great to see CI and code distribution brought together. A welcome surprise to me is the support for Linux, OSX & Windows. AFAIK that makes binstar the only CI service to cover all of those.
I get the feeling you're not asking "how do I run pip?" but rather how to package up your own packages/repository in order to use conda to install these custom packages across your environment.
This is documented over here
> Downgrading can happen e.g. if you update everything and then install a new package which does not support the newest numpy version yet
Sure, if packages don't pin their dependency requirements, this can happen. But if a certain version of a dependency is required for an installed package and that version is specified in its requirements, then conda will halt installation rather than break those dependencies. From conda's documentation:
> conda install
>
> ...
>
> This command accepts a list of package specifications (e.g., bitarray=0.8) and installs a set of packages consistent with those specifications and compatible with the underlying environment. If full compatibility cannot be assured, an error is reported and the environment is not changed.
To my knowledge, pip has no such feature.
conda is a package/environment manager. You can store anything on anaconda repo, not just conda packages or python packages. You can even store pypi packages and point pip to your internal mirror (https://docs.continuum.io/anaconda-repository/admin/mirrors-pypi-repository). Some other examples are r (http://conda.pydata.org/docs/r-with-conda.html) and binaries (postgresql for example https://anaconda.org/conda-forge/postgresql)
Yeah, you will have to re-install all the other packages you want to use so they are linked to the correct Python installation you want to run. Also, you shouldn't have to use python3 if your default Python is 3.x. The nice thing about anaconda is you can use conda environments (kinda like virtualenv but better imo) and specify the version of Python you want to use. Whenever I start a new project I just run
conda create -n new-project python
which creates a new environment called new-project
My default anaconda installation uses python 3 but if you want a python 2 environment you just do
conda create -n new-python2-project python=2
now you can activate your new environment by running
source activate new-project
or
source activate new-python2-project
You can list all your environments with
conda env list
Using environments like this is a really good way to work in case you have specific versions you need installed for one project but another project needs another version.
For more info on conda
commands see docs:
It's important to include the 'envs' documentation in any Anaconda recommendation (in my opinion at least; it helps you set up a space you can mess up and break and then destroy easily).
Let's see:
Don't do this from IPython. The program is going to request your input for permission to download a fresh copy of all the packages. I don't think that prompt is going to appear for you. That's probably why it was just sitting there.
Run from the actual command line. You should see
Using Anaconda Cloud api site https://api.anaconda.org
Fetching package metadata: ....
Solving package specifications: .........
followed by other output.
>"pull back & nuke it from orbit?" I think you mean you want to remove the env and start over? That's in the documentation under 'Remove': http://conda.pydata.org/docs/using/envs.html
Inline code is through the ` character (next to the 1).
If you use the Anaconda python distribution, you can easily create environments that let you quickly switch between Python 2 and Python 3 (plus packages you have specified for the environment).
It's really simple - you can download the installer here. Once you've got it installed, you can create new Python environments with
> conda create -n environmentname python=3.4 package1 package2 package3
And then switch to the environment with > source activate environmentname
To install something, e.g GCC, you can just use > conda install gcc
As it seems many here are recommending, I also recommend going Anaconda, and more specifically the package manager Conda. Since trying Conda, I have had zero desire to go back to virtualenv.
I didn't see anyone answering your questions with a focus on the package manager, so let me do that:
Need a tool on a shared server? conda install xyz, and then I can call the executable. Easy.
Need perl? conda install -c bioconda perl (-c is the channel of the kind users who shared the package).
Need the latest gcc available without messing with the server? conda install -c r gcc.
You can package all kinds of stuff into your environments: R, ruby, specific Python versions, etc.
I'm sure there are more reasons, but I think the focus on Conda rather than Anaconda is the answer to why you would be interested when you already have a working environment. As soon as you need something outside of Python, Conda makes it much easier.
Anaconda combined with virtual environments is a great way to get started in Python.
http://conda.pydata.org/docs/using/envs.html
Good overview of virtual environments
http://www.numericalexpert.com/blog/conda_virtual_environments/
I understand. That is certainly a big challenge. I don't advise trying to break any corp rules, but you might want to check out miniconda.
You can install without admin rights and easily update and manage your python libraries using conda. Obviously you need to be comfortable that IT can support this process but it is a relatively simple install.
Good luck.
Additionally if you want the minimal environment go with Miniconda from http://conda.pydata.org/miniconda.html. I don't know why they make this download hard to find, but I prefer it. To get the full Anaconda experience just do conda install anaconda
after setting up Miniconda.
Three ways I can think of right now:
Put all required modules and your own script file in the same folder. Add an (empty) file named __init__.py (two underscores on either side) to the folder. That will mark it for Python as containing importable modules.
Compile pip from source and put it into your ~/bin. Needs some setup (altering $PYTHONPATH), but it'll do user-local installs that won't require super user privileges.
Install anaconda python (http://anaconda.continuum.io). That will install and run without super user privileges and keep everything in its own folder in your user home. Then install anaconda's pip with
conda install pip
and install everything you need either with conda, or with pip if it's not in anaconda's repo.
I usually use the third option as it's the easiest and requires no messing with packages or folders. If you're short on space in your user home try miniconda instead, which is a minimal install of anaconda http://conda.pydata.org/miniconda.html
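The first option above can be sketched in a few commands. This is just an illustrative layout (the folder name myproject is an assumption; the marker file Python actually looks for is __init__.py):

```shell
# Sketch of option 1: keep your script and its pure-python deps together.
mkdir -p myproject
touch myproject/__init__.py   # marks the folder as an importable package
# Drop the required pure-python modules into myproject/, then verify with:
#   python -c "import myproject"
```

Run from the directory that contains myproject, Python will then find the package without any sudo or site-packages access.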
These days you'd be way better off just using anaconda for all your Windows Python needs. It's like seriously the best thing that happened to Python packaging ever. And yeah, conda install paramiko
Just Works™.
And then use the conda package manager to create an environment and install biopython (and lots of other stuff).
Added plus: it is not only python-specific.
Give miniconda or a full Anaconda install a go. Should get you set up in minutes. If you're on a Linux these few lines should get you set up:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
bash miniconda.sh -b -p $HOME/miniconda
export PATH="$HOME/miniconda/bin:$PATH"
conda create -n env_name python=3 numpy matplotlib
source activate env_name
Not sure what you mean by installing the javascript. Installed plotly by doing:
pip install plotly
If you mean, how I installed my entire Python / Jupyter Notebook stack, I installed a virtual environment using miniconda3. Then installed the needed packages.
$ conda info
command displays information about current conda install, and "envs directories :" shows the directories that conda will use to create environments.
You can override this configuration and assign more than one directory with your conda configuration file: .condarc.
A .condarc file is not included by default. You can use a text editor to create a new file named .condarc and save to your user home directory.
The .condarc configuration file follows simple YAML syntax. To specify directories in which environments are located, you need to set the "envs_dirs" key in the .condarc file as follows:
envs_dirs:
  - ~/my-envs
  - ~/my-other-envs
  - ~/anaconda3/envs
If this key is set, the default environment directory ($PREFIX/anaconda(3)/envs) is not used unless explicitly included (I added it third in the example above). You can make sure with $ conda info
command that the "envs directories" list has been overwritten.
The first directory is used as the default environment directory. For example, if you run the command $ conda create -n testenv python=2
, testenv environment is created in the my-envs directory.
If you want to create an environment in the second directory (~/my-other-envs), you can use the command $ conda create -p ~/my-other-envs/testenv2 python=2
. To activate testenv2 environment, you only need to run the command $ source activate testenv2
instead of $ source activate ~/my-other-envs/testenv2
. Similarly, the command prompt shows only (testenv2) instead of (/home/user/my-other-envs/testenv2).
See:
http://conda.pydata.org/docs/config.html#the-conda-configuration-file-condarc
http://conda.pydata.org/docs/install/central.html#specify-environment-directories-envs-dirs
I'm not using anaconda but I think you should be able to create a python3 environment with:
conda create --name py3 python=3
then activate it and launch Jupyter Notebook from there:
source activate py3
jupyter notebook
Taking a shot in the dark.
Well, you should be able to use a Gohlke binary, then use conda convert
Interestingly enough, the docs on that page give an (old) example:
conda convert cvxopt-1.1.7.win-amd64-py2.7.exe -d 'numpy >=1.8'
Gohlke no longer packages exes, so you should replace the name above with the .whl file you installed (I am guessing that the conversion is similar).
If that doesn't work, the Gohlke site also has the source of the port, so that route may work too.
EDIT:
Also: if you have a question related to Python, please ask on /r/learnpython. Also, don't repost questions. Please.
The easiest way to do this, especially on windows, is to install miniconda and to create a conda environment (like a virtualenv environment) with conda create --name mypy3 python=3.4
and conda create --name mypy2 python=2.7
and then you can switch between them with source activate mypy3
and source activate mypy2
P.S. TIL Spyder is official Anaconda GUI :)
It has nothing to do with conda, they just apparently tend to ship it along with Anaconda distribution in the noob-friendly format.
Also, note the difference between conda (http://conda.pydata.org/docs/) and the Anaconda distribution (https://store.continuum.io/cshop/anaconda/). Anaconda runs on top of conda (or, rather, it's conda + a big set of pre-selected packages), but you don't need Anaconda to use conda.
Ah, you are totally correct on that. I haven't used conda or miniconda before, and I saw you mentioned using sudo in the OP, which you would generally be doing to install outside your home directory. Looking at the quick-install directions, they don't mention running the install script with sudo, so yeah, you'd definitely be installing as the user that you want to run the software with. It also mentions closing your terminal session and reopening it, probably to get changes made by the install script to your .bashrc working. Have you tried that yet? It would fit with your .bashrc being updated, but the $PATH environment variable not reflecting that.
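A quick way to check whether that .bashrc change is actually visible in the current shell is to test PATH directly. The install prefix $HOME/miniconda below is an assumption, adjust it to wherever the installer put things:

```shell
# Simulate the line the installer appends to .bashrc, then verify PATH.
# $HOME/miniconda is an assumed install prefix; change it to match yours.
export PATH="$HOME/miniconda/bin:$PATH"

# POSIX-portable membership test on the colon-separated PATH string.
case ":$PATH:" in
  *":$HOME/miniconda/bin:"*) echo "miniconda bin is on PATH" ;;
  *)                         echo "miniconda bin is NOT on PATH" ;;
esac
```

If the second branch fires in a fresh terminal, the .bashrc edit didn't take effect and re-sourcing (or reopening the terminal) is the fix.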
I would highly recommend conda! It's both a package manager and an environment manager. There is a small learning curve, but after that setting up modules is sooo much easier.
To answer your original question though, .whl are pip wheel files and you should be able to install them by saying pip install blah.whl
Have you considered using conda? See http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html If you want better environment separation, you could also go to docker or to a virtual machine. Or consider alternatives to matplotlib (heresy?) like bokeh, seaborn, or other tools. If not... can you at least post some examples of the issue you are having?
You can use pip as suggested, but the most foolproof method for managing python packages is with a conda environment.
First, download and install miniconda. After that, open up a terminal window and type
conda install numpy
I use the conda package manager, and have PySide, PyQt4, and PyQt5 installed simultaneously, and I have no problem using any of them....
$ conda install pyqt
Then I start PyCharm, open a new python file, and type
from PyQt4 import QtGui, QtCore
I've never had a problem... can you explain what the exact issue you're running into is?
I can confirm that matplotlib styles work with python 3, as I use them all the time.
I don't mean to suggest you go back to square 1, but have you considered using Miniconda?
With miniconda, you can use the conda package manager which is absolutely fantastic (you can use pip too if there is a package not in conda, which happens occasionally).
Once you have miniconda installed (you should remove your previous version of python), run the following lines from the command prompt
conda update --all
conda install matplotlib
It will install everything you need. If you need to install other packages, it works great for that too.
conda install numpy pandas seaborn ipython-notebook
You can even download/install R with conda
conda config --add channels r
conda install r
conda install r-ggplot2
Conda works at a higher level than pip. It is just a package manager, not necessarily a python package manager. So, when you do conda install, it tries to find binary packages on binstar.org. You can install pip/python with conda, then pip install anything. Generally, you'll want to try conda install first, then use pip if there isn't a conda build available. With conda, you don't have to deal with having all the compilation requirements, since the packages are already compiled for you.
Here is how to update conda/anaconda if already installed. Conda=package manager, anaconda=collection of scientific packages that come bundled together.
Thanks for your reply. Yeah Python is definitely suitable for what you want to do and you can also use cython if you want greater processing speed which is great since you already have exposure to C.
As for which Python version to use, it doesn't really matter other than you just have to go with the one that is supported by the libraries you need to use.
I would look into making virtual environments which in a nutshell allows you to switch between any version of Python without messing up your system level Python installation and/or packages.
I like to use the Anaconda distribution, which not only handles making virtual environments using the conda command, but also installation and management of packages. It comes with a lot of the popular packages already installed; or, if you prefer to install only the packages you need, you can install miniconda and then install the packages manually yourself.
I would recommend you look at installing miniconda. You can create conda environments for any version of python released easily, as well as manage packages in those respective environments.
I generally don't use conda environments, but installing conda packages (as well as dependencies) is quite easy with:
conda install numpy
conda install ipython-notebook
or whatever package you want to install...
You can upgrade all your packages installed with conda with:
conda update --all
When run in the root environment, that will actually upgrade your version of python as well.
Strange that their build instructions would be wrong. Use pip then. If you're within a conda virtual env, pip adds to it just fine, or so I read: http://conda.pydata.org/docs/faq.html#env-installation
If not a virtualenv, it shouldn't matter AFAIK
Basically all of these complaints are solved by conda.
"Why I Promote Conda": http://technicaldiscovery.blogspot.com/2013/12/why-i-promote-conda.html
sure. tl;dr - binaries are a work in progress; so far, binary installs are supported only through conda on osx
so at the moment we're working on building / distributing pythonocc through conda, see instructions here. that said, so far only osx binaries are there, getting windows and linux support is planned for the 0.17 release. getting builds for linux should be pretty low hanging fruit. getting solid windows support is something where help would be much appreciated, since neither Thomas nor myself are intimately familiar with the platform specifics.
One option is also to install Conda as your package manager. It comes with environment management and you can also use it to install other packages for other languages, like javascript and R.
"Anaconda can create custom environments that mix and match different Python versions (2.6, 2.7, 3.3 or 3.4) and other packages into isolated environments and easily switch between them using conda, our innovative multi-platform package manager for Python and other languages."
https://store.continuum.io/static/img/Anaconda-Quickstart.pdf
Create a new environment (e.g. myenv
) with python 2.7 as the default interpreter inside it
$ conda create -n myenv python=2.7
Activate the environment myenv
$ source activate myenv
Use pip in the active environment myenv
(myenv) $ conda install pip
(myenv) $ pip install oauth2
Deactivate the environment myenv
(myenv) $ source deactivate
See Conda FAQ.
The complete Anaconda distribution includes these packages. Minimally, this should do it:
conda create -n envname python pip sqlalchemy ipython-notebook cython numexpr pandas xlrd xlsxwriter # matplotlib
generally speaking, to manage dependencies I've been using conda (specifically, Miniconda), it's been working really well...
If you're on linux or OS X, with miniconda installed you can first do
conda install pygraphviz
That yielded nothing, but suggested I do a binstar search
binstar search -t conda pygraphviz
Here I got a number of hits for Linux-64 (one hit for Linux-32), and OS X... nothing for windows.
If you're using Linux or OS X, I would suggest looking into Miniconda and use that to handle dependencies...
If you're running a linux-64 bit version, (with Anaconda or Miniconda installed) binstar had a hit for ETE2, so installing it should just be a matter of running
conda config --add channels lidavidm  # the channel with the ETE2 package
conda install ete2
that should handle all the dependencies
it largely depends on the complexity of your stack / dependencies.
if you use a lot of dependencies wrapping libraries then conda is the way to go. It lets you handle the library dependencies with great ease. If you do scientific computing, it's pretty much the way to go. I'm developing robotics software that ties in a large number of large, complex C++ libraries. Building, distributing, and deploying has been a pain in the neck, and adopting conda has made a night and day difference for me. Think of conda as a cross-platform, pythonic apt-get.
finally, conda ties in with binstar (for distributing and building your modules). IMHO it takes the combined ambition of virtualenv and pip to a whole other level.
You actually can mirror Anaconda repo anywhere, you can even create local repo with your or upstream packages: http://conda.pydata.org/docs/custom-channels.html
Anaconda Server seems to be some cool management wrapper around simple HTTP server serving packages, nothing more. Your organization can run similar server themselves.
Of course you'll need to do some experiments on how to manage such a mirror w/o Anaconda Server, but it should be pretty straightforward.
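As a rough sketch, a self-hosted channel is just a directory tree served over HTTP. The directory names and port below are assumptions, and the conda-side commands are shown as comments since they require conda/conda-build on the host:

```shell
# Hypothetical local conda channel layout; names are illustrative.
mkdir -p my-channel/linux-64 my-channel/noarch

# Copy your .tar.bz2 packages into the platform subdirectories, then
# (with conda-build installed) generate the repodata index:
#   conda index my-channel
#
# Serve it with any static HTTP server, e.g. (Python 3.7+):
#   python -m http.server 8000 --directory my-channel
#
# and point clients at it:
#   conda config --add channels http://localhost:8000
```

That is essentially what the custom-channels doc linked above describes, Anaconda Server adds management conveniences on top, not anything clients strictly need.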
I would recommend miniconda3
Then just: conda create -n myenv python=3
Then if on Linux: source activate myenv to activate it
Then: source deactivate to deactivate your environment
I think on Windows you can drop "source" from the command
To install packages: conda install package_name
To list packages installed: conda list
No. If you're on windows I highly recommend you install Miniconda. It has two advantages:
Once you have Miniconda, you can do conda create -n py27 python=2.7 pip
to create an environment called py27
which has python and pip. (py27
is just the name, call it whatever you want). Then you can do activate py27
and all further commands will assume they refer to that environment (in that particular terminal window).
To install a package into a conda environment:
Use conda whenever possible, e.g. conda install numpy
If a package isn't available through conda, use pip install
Note that conda is designed for all sorts of things and not just Python, which is why you have to include python in the conda create
command if you want Python in your new environment. Also you'll note there are two Miniconda downloads, for 2.7 and 3.4. It does not matter which you choose, you'll be able to create new environments with any Python version from either. It only affects which Python is the default.