Author here. Probably the version you'll get on Amazon is a snapshot of the print version, which at this point is 4 years old and has a number of errors. The version you can get at https://git-scm.com/book incorporates innumerable contributions from the community, and so is more up-to-date and correct, so while it won't help my Amazon ranking in tech authors, I'd recommend getting it from the Git website.
I've got good news and ~~bad~~ better news. (see second edit)
The good news is, if you've already run git add on the files, git will have added their contents as blobs to the object store, so the contents of the files still exist on disk (until they eventually get collected as garbage).
The bad news is, the data could be tough to find. I don't know of a good way to search the blob store. If you have a (very) small repository you may be able to find it manually though.
Here's an example of creating a new repository, adding a file and then retrieving it from the blob store:
$ git init
Initialized empty Git repository in /Users/tpettersen/tmp/resurrect/.git/
$ echo "Test file" > test.txt
$ git add test.txt
$ git rm -f test.txt
rm 'test.txt'
$ ls .git/objects/
52 info pack
$ ls .git/objects/52
4acfffa760fd0b8c1de7cf001f8dd348b399d8
$ git show 524acfffa760fd0b8c1de7cf001f8dd348b399d8
Test file
Blobs (and trees & commits) are stored in the .git/objects directory. The first 2 characters of the object's SHA are used as the directory name and the remaining characters are the object's filename. The blobs themselves are in a binary format though, and I don't know a good way to search through them wholesale. There's some other great info on the object store in the Git internals section of pro-git.
edit:
Actually, you could run git fsck to list just the dangling blobs (blobs that aren't referenced by a tree/commit) in your repository. If there aren't too many, you could manually check each of them using git show (or grep through them using a script).
double edit:
Here's a command that should print all of your dangling blobs (warning, there may be quite a bit of output depending on your repo's recent activity):
git fsck | grep 'dangling blob' | awk '{ system("git --no-pager show " $3) }'
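If you want a reproducible end-to-end version of the recovery flow, here's a sketch in a throwaway repo (file names invented for illustration; `git cat-file -p` is equivalent to `git show` for blobs):

```shell
# Throwaway demo: stage a file, delete it, then recover it from the
# dangling blob that git fsck reports.
set -e
cd "$(mktemp -d)"
git init -q
echo "precious data" > notes.txt
git add notes.txt        # contents are now a blob in .git/objects
git rm -fq notes.txt     # file and index entry gone, but the blob remains
blob=$(git fsck | awk '/dangling blob/ { print $3 }')
git cat-file -p "$blob"  # prints the recovered contents
```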
Well, you can't really install a GitHub server (unless you are willing to pay horrendous sums for GitHub Enterprise), but you can install GitLab, which is basically an open-source clone of GitHub.

The advantage is that you get tools like issues, wiki, etc., which you'd otherwise have to pay for when using Atlassian tools (Jira, Confluence).
> A SHA-1 collision is less likely than every member of your programming team being attacked and killed by wolves in unrelated incidents on the same night. - Official Git documentation
Welp there goes my team.
Fortunately, you don't need to extend that list. Instead, you want to create a custom merge tool and access it via this option: https://git-scm.com/docs/git-mergetool
If you write a program that is capable of resolving merge conflicts properly between two copies of a specific kind of binary, and invoke it using the merge tool command, Git will happily accept the results of that merge. You won't need to modify Git.
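As a sketch, registering such a tool might look like this. `my-binary-resolver` is a hypothetical program standing in for whatever merge logic you write; the `$LOCAL`/`$REMOTE`/`$BASE`/`$MERGED` placeholders are the ones git-mergetool documents. The demo uses a temporary HOME so it doesn't touch your real config:

```shell
# Register a hypothetical custom merge tool named "binmerge".
export HOME=$(mktemp -d)   # demo only: keeps this out of your real ~/.gitconfig
git config --global mergetool.binmerge.cmd \
  'my-binary-resolver "$LOCAL" "$REMOTE" "$BASE" -o "$MERGED"'
git config --global mergetool.binmerge.trustExitCode true
# During a conflicted merge you would then run:
#   git mergetool --tool=binmerge path/to/asset.bin
```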
You're running Windows 7 on a VPS?
To answer your question: Windows has no native SSH support, which is how remote Git repos are usually accessed (outside of GitHub). So people don't build Git servers to run on Windows.
http://git-scm.com/book/en/Git-on-the-Server-The-Protocols
If you really want to do it, install OpenSSH for Windows, and then anyone can access the git repos on the VPS via SSH.

Side note: I've installed GitLab on Ubuntu several times. It's really easy. If you can get an Ubuntu VPS it will probably work a lot better and be a lot easier to maintain, plus you get tons of extra functionality similar to GitHub. An Ubuntu VPS with 1 GB of RAM is really cheap to rent per month.
Good luck.
> Again 99% of the time autocrlf is the right thing.
100% of the time it's the wrong thing. Basically all documentation recommending `core.autocrlf` is itself outdated or based on other outdated documentation. Line endings are a data issue, not a user preference, and it's not enough to have identical configurations for `core.autocrlf`, because that doesn't address various shell interpreters' inability to deal with `\r\n`, nor Visual Studio's stubborn insistence on using `\r\n` in project files. I almost feel like `git init` should create a default `.gitattributes` with

    * text=auto !eol

(The `!eol` is to explicitly unset any possibly user-specified `core.eol` setting, which can override the default behaviour of `text=auto` — thanks, Git.)
So you already know about `git push`, which pushes new commits on your branch to the remote.

The converse operation is `git pull`, logically enough. This first fetches the commits from the remote that you don't have, then merges those commits into your local branch. That is, you can think of it as:

    git fetch origin master
    git merge origin/master

If you want to rebase your not-yet-pushed commits — this will "replay" them on the new tip of the branch — you can use `git pull --rebase` instead. This is effectively a shortcut for:

    git fetch origin master
    git rebase origin/master

In most cases you never need to touch `git fetch`. It's not exactly at the bottom of the Git stack, but I would consider it a "lower level" command than `git pull`.

(`origin/master` here refers to a so-called "remote-tracking" branch — this is a kind of staging area for fetched commits. They go there before some `merge` or `rebase` operation puts them into your regular branches. This two-step `pull` process allows for a variety of different policies to be implemented.)
The Pro Git book has a good summary of these commands. I suggest looking through some of the links there.
Don't use `git reset --hard` unless you really, really mean it! It literally means throw away everything, set `HEAD` to be a pointer to the identified commit (or branch), and check out that commit (or the commit at the branch's tip) into the working directory.

In particular, your use of `git reset --hard` will have turned `HEAD` into what's called a "detached `HEAD`". This is where `HEAD` points to a commit, rather than a branch. This is almost certainly not what you want.
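If you want to see what that looks like (and how to get back on a branch), here's a sketch in a scratch repo:

```shell
set -e
cd "$(mktemp -d)"
git init -q
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on config
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "first"
git checkout -q "$(git rev-parse HEAD)"   # checking out a raw commit detaches HEAD
git symbolic-ref -q HEAD || echo "HEAD is detached"
git checkout -q "$branch"                 # reattach by checking out a branch
git symbolic-ref --short HEAD             # prints the branch name again
```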
Absolutely. This is where leveraging the power of Git's hooks comes into play. Depending on your acumen for scripting, you can use a third-party program such as gitolite or do it yourself.
The way I handle this at my shop is by employing the power of the `update` hook and user groups. This works for my flow, as my junior devs can push to ticket-based branches; then, when signed off, those are pulled into scoped branches by senior devs. This allows me to create a file in each repo called `allowed-groups`, where groups are given permission to update certain git refs. It looks like this:

    refs/heads/T[0-9].* junior senior
    refs/.* senior

Using regex, what this means is: for branches that match the pattern T<number> (which corresponds to our ticketing system), users in the junior and senior groups have update access, and the next line gives senior devs the ability to push everywhere (including tags).

As per the docs on the update hook:

> this script takes three arguments: the name of the reference (branch), the SHA-1 that reference pointed to before the push, and the SHA-1 the user is trying to push

Leveraging this, I check against the allowed-groups file to verify access.

You can apply my approach for your person -> branch mapping, as well as expand it to include deeper ~~searching~~ diff'ing against the ~~commits~~ starting and ending commit hashes with the flag `--name-only` for users who are bound to directories.
E: clarity
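For the curious, the gist of such an update hook can be sketched like this. The allowed-groups format matches the example above; the `USER_GROUPS` variable is a stand-in for however you actually resolve the pushing user's groups, and the demo exercises the hook directly rather than via a real push:

```shell
set -e
cd "$(mktemp -d)"
cat > allowed-groups <<'EOF'
refs/heads/T[0-9].* junior senior
refs/.* senior
EOF
cat > update-hook <<'EOF'
#!/bin/sh
# update hook args: refname oldrev newrev
refname=$1
while read -r pattern groups; do
  if echo "$refname" | grep -Eq "^$pattern$"; then
    for g in $groups; do
      case " $USER_GROUPS " in *" $g "*) exit 0 ;; esac  # allowed
    done
    exit 1  # matched a rule, but the user's groups aren't in it
  fi
done < allowed-groups
exit 1      # no rule matched: deny by default
EOF
chmod +x update-hook
USER_GROUPS=junior ./update-hook refs/heads/T123 old new && echo "junior may push T123"
USER_GROUPS=junior ./update-hook refs/heads/release old new || echo "junior denied on release"
```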
> I want to be able to have every file have an "init commit" message next to it on github, but some of my files have messages from previous commits I did.
That's a bad idea, don't try to do that. A commit does capture the full state of every file, so you can see all the files as they were for that commit by clicking on the commit message and then clicking the Browse Code button. GitHub choosing to show the message from the last commit that changed the file is just a display issue. You shouldn't do unusual things like modifying every file just to make the display look the way you want on GitHub.
You should make a tag using Git (also see this SO question).
You want to look at the different merge strategies in the .gitattributes file.
https://git-scm.com/book/sl/v2/Prilagoditev-Git-a-Git-Attributes
> You can also use Git attributes to tell Git to use different merge strategies for specific files in your project. One very useful option is to tell Git to not try to merge specific files when they have conflicts, but rather to use your side of the merge over someone else's.
>
> This is helpful if a branch in your project has diverged or is specialized, but you want to be able to merge changes back in from it, and you want to ignore certain files. Say you have a database settings file called database.xml that is different in two branches, and you want to merge in your other branch without messing up the database file. You can set up an attribute like this:
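The attribute the book goes on to show is essentially this (a sketch in a scratch repo, reconstructed from the Pro Git example; note the "ours" driver has to be defined once yourself):

```shell
set -e
cd "$(mktemp -d)"
git init -q
# Tell Git to keep our side of database.xml in merges...
echo 'database.xml merge=ours' >> .gitattributes
# ...and define the "ours" merge driver as a no-op that keeps our version:
git config merge.ours.driver true
git check-attr merge database.xml   # confirms the attribute is picked up
```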
No, gitignore is not mandatory, in the sense that you can create a new repository, commit things to it, and work without ever adding a gitignore file. Git allows you to work without one, so in that sense it is not mandatory.
Is having a gitignore a very good idea? Yes, absolutely. You can create a personal, system-wide ignore file (see also the `git config` setting for `core.excludesFile`) in which you should place lines for e.g. your personal editor's scratch files. A .gitignore file you track in a repository should be relevant to the files likely to need ignoring by that repository: e.g. if it contains C code, then intermediate and final build output (`*.o`, `*.a`, etc.); if you're tracking a LaTeX report, then you might want to ignore `*.pdf`, and so on.
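A sketch of setting up that personal, system-wide ignore file (the demo uses a temporary HOME so it doesn't touch your real config; the patterns and file name are just examples):

```shell
set -e
export HOME=$(mktemp -d)   # demo only: protects your real ~/.gitconfig
ignore="$HOME/.gitignore_global"
printf '%s\n' '*.swp' '.DS_Store' > "$ignore"
git config --global core.excludesFile "$ignore"
# Now every repo on this machine ignores editor scratch files:
cd "$(mktemp -d)"; git init -q
touch main.c main.c.swp
git status --porcelain     # lists only main.c, not the .swp file
```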
Use git rebase to squash your WIP commits out. Do so on a branch so you aren't rebasing on master/development.

Or you can use GitHub's UI to squash your merge requests into a single commit automatically.

In both cases you keep your WIP commits but "fold" them into your real commits.
You should use git revert
in this situation. It will undo the changes made by the faulty commit. Then push the newly created commit to remote.
See https://git-scm.com/docs/git-revert for more info.
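A quick sketch of what that looks like in a scratch repo (file and commit names invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q
gitc() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
gitc commit -q --allow-empty -m "good commit"
echo "broken setting" > config.txt
git add config.txt
gitc commit -q -m "faulty commit"
gitc revert --no-edit HEAD     # new commit that undoes the faulty one
test ! -f config.txt && echo "faulty change undone"
# then publish the fix: git push
```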
The git book is a must-read for anyone who wants (or has) to work with git.
For me, it helped to be surrounded by serious git users for five years. Especially being forced to rebase, squash commits and rewrite history taught me a lot. Leaving your comfort zone is quite harmless with git, since you can almost always go back to a previous state (at least as long as you're working locally). Get to know git reflog!
He is telling you that all flat files (.html, .css, .cs, .java, etc.) should be version controlled. Maybe you use svn or some variation, but not having a backup of every state your code has ever been in is nuts.

So I hope you make some sort of backup - .zips, or at least Dropbox, where you can roll back up to 30 days if you mess something up.
So head over to the getting started page for Git, and read about what it can do for you (tl;dr: save all your files, in every version, when you specify it, and let you work on the same code on multiple machines without a big hassle of merging them).

If you have a lot of clients whose code is all the same but still has different colors and icons, I assume you store the path for this in a db and just map the company to it. If you want all your flat code changes to be deployed automatically, look up something called Continuous Integration - this can be triggered when you save your code in Git.
Internally, git stores file contents as objects called blobs, and a commit just points to a tree of these objects, so identical content is stored only once; packfiles additionally store similar versions as deltas. If you look in your .git folder you will not see multiple full copies of the entire contents of your repository. This has enormous advantages in size savings as well as speed, not just for network transfers but for diffs and logs and many other operations.
Size is one of the major advantages that git has over some of the other VCSs available. The Mozilla browser codebase and history, for example, takes up about 12 GB when stored in SVN, while git stores the exact same revision information in only 420 MB.
This stack overflow thread has a lot of great basic information about the way Git stores information about your files and how they've changed, and also has a bunch of links to further information if you're interested.
Git has many mechanisms to ensure data integrity, but leaving duplicates of all your files lying around is definitely not one of them.
Almost every action in git repo can be reverted. Read about git reflog
and git reset
commands.
This may be helpful too: http://git-scm.com/book/en/v2/Git-Internals-Maintenance-and-Data-Recovery
> A leading "**" followed by a slash means match in all directories. For example, "**/foo" matches file or directory "foo" anywhere, the same as pattern "foo". "**/foo/bar" matches file or directory "bar" anywhere that is directly under directory "foo".
https://git-scm.com/docs/gitignore
So, you want `**/error_log`, which is equivalent to `error_log`. Either will do.
Fundamentally, that's not what Git is. It's a tool for tracking versions of code, not for deploying code.
But that hasn't stopped many people from using it for deployment. You can write a <code>post-receive</code> hook to build and deploy your code for you after a push, or use any number of existing deploy tools that use Git as a trigger and source for files.
Searching "git deploy tools" should turn up a bunch of options as well as some sample hooks.
We used Git to control the Asciidoc source for the Pro Git book. Probably the one thing that made it work the best was to only insert line breaks at the end of sentences. This makes it so that changing a sentence only lights up that one sentence in whatever diff reader you're using, and only highlights the part of the sentence that changed in a reasonably smart one.
It's slightly annoying to work like this in your editor, and easy to fall off the wagon. I ended up writing an Atom plugin to help out.
Let's take a step back and get a quick overview here.

Git is a tool for helping you manage your source code. It has an awful lot of features, and there are a lot of tools built around it. It is not a code editor. There are GUIs that come with it/are built for it, but they're optional. Most tutorials etc. you find on the web are going to assume you're using it in a terminal.
GitHub is a website that makes it easy for you to share your code with others. It is built on top of git. Git includes ways to communicate with other systems to share source code in a managed fashion. GitHub, in addition to its web-based service, offers a tool (confusingly also called GitHub) that you can use as a gui for managing git locally, and for moving your source code from your local machine up to the GitHub service.
You also presumably have some kind of code editor, like xcode or textmate. This is the program that you use to write your software. Many of them have places in their interface for communicating with git.
It is entirely possible to use git without using GitHub. You can use git without having any integration with your editor of choice. The preferred way to deal with git is by typing commands into a Terminal.app session.
I suggest that rather than screwing around with the GitHub Mac Application, you instead open a terminal and start there. There are quite a few tutorials around the web to introduce you to the basics of git. This is the one I usually recommend.
At a conceptual level, git doesn't compress anything. It runs your file through the hash algorithm to get a 40-character hex string unique to its contents, then stores the contents in your project's `.git/objects` under that hash (i.e. it makes a file named after that hash, and puts the contents of the file in there).
At a practical level, though, as an optimization, git zlib-compresses those contents, so you can't just open that file and read it. Zlib uses the same DEFLATE compression as the zip format, so git effectively does zip everything under the hood. You don't need to do it yourself, and it would just cause more work for you.
Git has a further optimization: packfiles. In the `objects` directory is a `pack` directory. It starts out empty, and as you begin working in a repo there's nothing in it. Eventually git will decide to optimize by packing all of the loose objects into a packfile in the `pack` directory. You can trigger this manually at any point with `git gc`. Also, whenever you push objects, or fetch/pull them, they're packed up in a packfile for sending, and they aren't necessarily unpacked on the other end; Git can use its own packfile format natively and expediently.
Read the packfiles section of the Pro Git book - it goes over these things, and talks about git's compression of a large (12k) text file.
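You can watch the packing happen in a scratch repo (a sketch; exact loose-object counts will vary with your setup):

```shell
set -e
cd "$(mktemp -d)"
git init -q
echo hello > a.txt
git add a.txt
git -c user.name=d -c user.email=d@example.com commit -q -m "one"
find .git/objects -type f -not -path '*pack*' | wc -l   # a few loose objects
git gc --quiet                                          # pack them
ls .git/objects/pack                                    # now a .pack and .idx
```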
Yes, there is. You want the `-C` option, like `git -C /whatever/directory gc`.

Note that the `-C` comes before `gc`. It won't work if you do `git gc -C /whatever/directory`. The position of the argument matters.

This is because `-C` is a sort of general argument for the `git` command, not an argument to the `gc` sub-command. That is, you could use `-C` with not only `git gc` but also other commands like `git status` or `git log`.

More info is available from the manual page here. (Note that Git has multiple manual pages, and that, since this is for the base `git` command rather than the `gc` sub-command, this is documented in the corresponding manual page.)
TL;DR: `git add -u` says, “stage all changes to files we're already tracking”. It won't grab new files and it won't let you pick and choose. `git add --patch` may be a better option if you want to pick and choose what to index change-by-change, or `git add --interactive` if you want to go file-by-file.

Excerpt from `git help add`:

> -u, --update
>
> Update the index just where it already has an entry matching <pathspec>. This removes as well as modifies index entries to match the working tree, but adds no new files.
>
> If no <pathspec> is given when -u option is used, all tracked files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories).
There are a dozen ways:
Remember that git was created with almost this exact scenario in mind. Linus didn't want to have to push to a central server. It was created so that individuals can push and pull from each other without a 3rd party; each machine acts as a full peer. https://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon
Setup a private repo on github and push and pull from that until you're done and ready.
If you want to do it locally just share the drive that the repo is on.
Put the repo on a USB stick.
You can setup a git server locally, and use that as your remote.
You can search the output of any command by piping it through `grep`; for instance:

    ls -la | grep 'foo'

So you can do that with any Git command, for instance:

    git branch --list --all | grep 'foo'

In this particular case, if you look at the `git branch` documentation, you will see that you can also supply a pattern to `git branch --list`:

> If a `<pattern>` is given, it is used as a shell wildcard to restrict the output to matching branches.

So you can run (note the wildcards, since the pattern must match the whole branch name):

    git branch --list --all '*foo*'
Awesome. This is really cool.
Another cool (we think ;) ) alternative to Github is https://codegiant.io
We would love to get a mention on your list.
Our USPs -
Git's concept of a pull request is that of generating the output you see, usually for emailing purposes. GitHub's concept of a pull request is their web UI for code review. `request-pull` doesn't open GitHub pull requests.
This is actually normal behaviour. One of Git's design goals was to work well in a disconnected state, so almost all operations work against your local repository. It's pretty much just fetch and pull operations which check for remote updates, so you need to perform one periodically in order for your repository to become aware of upstream changes. Which to use comes primarily down to personal preference and workflow.
I'd recommend taking a look at Pro Git. It's very well written, goes into plenty of detail, and is available free in a number of formats.
Just add them to the file:
.git/info/exclude
It works the same as .gitignore, but only applies to your local copy of the repository.
See the ignore documentation for a bit more info: https://git-scm.com/docs/gitignore
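For example:

```shell
set -e
cd "$(mktemp -d)"
git init -q
echo 'scratch/' >> .git/info/exclude   # ignored in this clone only
mkdir scratch && echo tmp > scratch/notes.txt
git status --porcelain                 # prints nothing
git check-ignore -v scratch/notes.txt  # names .git/info/exclude as the source
```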
Yes, it matters. There is a quirk in git with reverting merges. If you want to merge back to master, you need to revert that revert on master and then merge your branch.
You can read more about why this happens in section "Reverting the Revert" at https://git-scm.com/blog/2010/03/02/undoing-merges.html
> We view GitKraken as a SaaS
> It's a SaaS.
I'm glad to know you feel this way because your website sure does a good job of selling this as (direct quote) "100% standalone" (http://imgur.com/a/LfR3y) (direct link: https://www.gitkraken.com/features)
It sure feels shady to me when one of your main selling points directly contradicts your intentions.
I have no problem with a company selling web-based features as a SaaS, but claiming a "100% standalone" desktop application is a SaaS is straight up lying.
Presumably once cloned, the OP shouldn't need to pull from the template repo again, so she could do:
git remote set-url origin <new-url>
This is going to be more "my recommendations", rather than "widely-accepted community recommendations". Just to be clear. :)
First off, you don't need `-a` on `git commit` if you've already added things to the staging index; it's redundant. Secondly, you should rarely use the `-m` option, as it strongly discourages well-formed commit messages.

I usually have untracked files hanging out in my working directory that I don't want to commit, so rather than using `git add .` I use `git add -u`, or sometimes `git add -p`; if I'm adding new files, I manually specify them (liberally using shell globs and expansions to avoid typing too much). Then I run `git status` to make sure the set of files is what I want (and I'm on the right branch), `git diff --cached` to make sure I've staged the appropriate diff and didn't leave in any TODOs, etc., and then `git commit`.
To summarize:
    $ git add -u
    $ git status
    $ git diff --cached
    $ git commit
    $ git push
I have git aliases set up to make these common commands a bit shorter.
Well, at work we use a system for planning and managing the life cycle of our tickets, i.e. development, code reviews and testing, and that forces us to commit changes (otherwise there would not be anything to review and test).

If you are a solo developer it may be beneficial to at least have a todo list with issues to fix, and commit after fixing each issue; it also helps when you want/need to roll back changes to a known state that previously worked.

I'd recommend creating aliases for the most common commands you use; it reduces typing and you don't need to remember all the parameters every time (I have an alias for git itself and a git alias for log with graph; for example, my command "g la" shows the log for all branches). You may take a look at https://git-scm.com/book/en/v2/Git-Basics-Git-Aliases.
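For example (the exact aliases here are my guesses at useful ones, not the commenter's actual setup; the demo uses a temporary HOME so it doesn't touch your real config):

```shell
export HOME=$(mktemp -d)   # demo only: keeps this out of your real ~/.gitconfig
git config --global alias.st status
git config --global alias.la "log --oneline --graph --decorate --all"
# plus a shell alias for git itself, e.g. in ~/.bashrc:
#   alias g=git
# after which `g la` shows the decorated graph log for all branches
```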
>Also, what's the upside of using git without GUI ?
>when I used the github desktop app back then it was really too easy
You've kind of answered your own question – GUIs can sometimes make things too easy, and encourage developing an incomplete or incorrect mental model of what's happening. Learning how to do it on the commandline can help ensure you're actually learning what's going on, but it's not foolproof. While I encourage learning the commandline (and IMHO the Pro Git book is a great way to learn), don't be too hard on yourself if you find that a GUI helps.
Personally, although I almost exclusively use the commandline to actually issue git commands, I do use a repository visualizer to see the current state of my commit graph and whether I have a dirty working tree or not, as well as anything added to the staging area. (I like `gitk --all`.)
>every time I open git console, I have to cd ... type the whole directory there
Do you have an option in your right-click context menu to e.g. “Open Git Bash here” if you right click on/in a folder?
Git can be overwhelming if you're not used to using the commandline. Try to be patient with yourself as you are learning. Good luck. :)
> take a look at GitHub as they have resources available to learn git.
I would caution that these resources will mostly be designed to teach git using GitHub and so maybe a better resource for beginners is actually the official Git Book, which will teach git without GitHub. (Though it does teach about remotes, just not about GitHub specifically.)
Take a look at the first example in the gitignore documentation:
> The pattern `hello.*` matches any file or folder whose name begins with `hello`. If one wants to restrict this only to the directory and not in its subdirectories, one can prepend the pattern with a slash, i.e. `/hello.*`; the pattern now matches `hello.txt`, `hello.c` but not `a/hello.java`.
Actually it's not mutually exclusive. Overleaf projects can be treated like remote repositories, see https://www.overleaf.com/learn/how-to/Using_Git_and_GitHub. The downside is that if people edit it on Overleaf the commit messages are quite generic and unhelpful. I also found out that it doesn't work with git submodules.
AFAIK, the main draw of Overleaf is that it enables collaborative LaTeX editing with real-time preview. I prefer to separate these two aspects. The collaborative part is better handled by Git (descriptive commits, diffs, etc.), and the real-time preview is a lot more snappy if I compile on my own computer (I use vim + vimtex for this, but you could use any of the available software). The downside with this is quite obvious, basically every collaborator needs to be on board with it, which can be tough!
The man page for the git-commit -a flag says:
>Tell the command to automatically stage files that have been modified and deleted, but new files you have not told Git about are not affected.
Your example needs to explicitly add the untracked files to the index.
What makes you think that the installed files themselves are at fault?
Also, installing and uninstalling software completely depends on your Operating System.
Have you tried the documentation?
And the reference: https://git-scm.com/docs/gitignore
Assuming .gitignore is in the root of your repo,
vendor/
will match any folder named "vendor"
/vendor/
will match only top-level folders named "vendor"
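You can verify the difference with `git check-ignore` (file names invented for the demo):

```shell
set -e
cd "$(mktemp -d)"
git init -q
mkdir -p vendor lib/vendor
touch vendor/a.js lib/vendor/b.js
echo 'vendor/' > .gitignore
git check-ignore -q vendor/a.js     && echo "vendor/: top-level ignored"
git check-ignore -q lib/vendor/b.js && echo "vendor/: nested ignored too"
echo '/vendor/' > .gitignore
git check-ignore -q vendor/a.js     && echo "/vendor/: top-level ignored"
git check-ignore -q lib/vendor/b.js || echo "/vendor/: nested NOT ignored"
```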
> But then I read somewhere (might be wrong) that the author isn't tied to the configs (???).
Where did you read that, and what exactly did it say?
Author information is usually taken from `user.name` and `user.email`, but you can override that e.g. with environment variables. Just like all config settings, values in a repository's `.git/config` take precedence over what's in your global `~/.gitconfig`.
3) Why don't you run git in your Docker container as well?
What you're looking for is a git submodule. There should only be one .git folder per directory and all directories under it, unless those directories are included in the parent directory structure as a git submodule.
git bundle is the tool for this problem: you generate a bundle file which contains everything that `git fetch` would have tried to transfer if you had an actual network connection. Then you can import it into the non-networked repo and update refs.
That said, I do agree with the other commenters who are suggesting that you just keep an entire copy of the latest version (just the files, not the git repo / history) on the USB stick, and use rsync to update the USB stick from the dev machine and rsync to update the server from the USB stick.
Check out the git documentation on hooks.
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
I don't know the specific answer to your question, but you'll probably want to write some regex and stick it in a hook. Maybe the pre-push hook that returns non-zero if the branch doesn't pass the regex? Maybe you want to implement a server side hook. This is a good place to start looking, anyway.
git bundle sounds like exactly what you need:
> Some workflows require that one or more branches of development on one machine be replicated on another machine, but the two machines cannot be directly connected, and therefore the interactive Git protocols (git, ssh, rsync, http) cannot be used.
Basically puts your whole repo including all the history/branches/tags into a single file which you can then email back and forth. I imagine it'll be rather annoying, but if IT really won't budge this may be your best option.
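A sketch of the round trip (paths invented; in real life the bundle file is what you'd email or carry across the air gap):

```shell
set -e
top=$(mktemp -d); cd "$top"
git init -q work && cd work
echo hi > file.txt
git add file.txt
git -c user.name=d -c user.email=d@example.com commit -q -m "first"
git bundle create ../repo.bundle HEAD --all   # one file: all refs + history
cd "$top"
git clone -q repo.bundle restored             # "the other machine" clones from it
ls restored/file.txt                          # full working copy restored
```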
“Adding” files “stages” them to be “committed.” The reason for the extra step is so that you can combine changes to multiple files into one commit, but at the same time not be required to include all changed files in that commit.
Some Git commands also work differently on staged files as opposed to changed/new unstaged files, e.g., <code>git diff</code>.
The subreddit sidebar has a few. Pro Git, the top on the list, is pretty much the best reference in my opinion. The name sounds intimidating, but it starts from basics, it's not meant to only be for advanced users.
BTW, avoid stuff written on Medium. Chances are it will be rubbish.
No. Git stores multiple similar files in what are known as pack-files. These basically store one copy of the full file, with a linked-list-like structure. Conceptually, think of it as:

    v3_hash -> v2_hash, diff2_hash -> v1_hash, diff1_hash

So, to get the most recent version of a particular file, the `blob` object is fetched directly from the `v3_hash`. If you check out a commit with an older version of the file (say `v2_hash`), it'll first grab the `blob` from the `v3_hash`, then it'll grab the diff listing for that file from the `diff2_hash`, and apply the diff to the `blob` returned from `v3_hash`.

Note: this is a gross simplification. I'm not actually very clear on the format of a pack-file, and I don't have an exact understanding of how it all works, but this is the general idea.

See Pro Git: Packfiles for a better explanation.
You can. http://git-scm.com/blog/2010/03/10/bundles.html talks about a way you might go about that.
However, since you're talking about binary files, you may want to consider something less fully-featured. Git is fantastic software for tracking text files, but it's actually not great at binary files like images. If you just want a backup and you don't need versioning, then a tool like rsync may be a better fit. It still only transfers the changes since you last ran it, but it also has easy resume support for if your connection times out, and you don't have to do three steps to get your data to the server (add+commit+push), it's just a single command.
> Then I did “git restore -s=HEAD --staged --worktree – .” based on a post on StackOverflow then it said “fatal: could not resolve =HEAD”
Don't just copy/paste commands from StackOverflow and hope it will work. That is not a recipe for success. Read the documentation.
Here's the documentation for <code>git restore</code>. Under the "OPTIONS" section, do you see an option that has a syntax like `-s=`?
Once you can answer that question, I think the reason for the error message will be evident.
Make sure you've fetched the updates for the `master` branch already.

Checkout the branch you're working on.

Merge the `master` branch into your current branch (`git merge master`), or rebase your branch on top of the `master` branch (`git rebase master`). Chapter 3 of https://git-scm.com/book/en/v2 explains these in some detail. I don't know which buttons you need to click in the GUI you're using, though.
Have you checked the sidebar? There are a dozen docs linked…
also tbh any Git tutorial that starts off explaining remotes and how to fork on github is not a very good Git tutorial.
I think this is the problem:
This means `git` will not enter the `.config` directory, because it matches the `*` pattern.
I think this will solve it:

```
# Ignore everything
*
# But track these files
!.gitconfig
!.gitignore
!.zshrc
# And track these folders
!.config
!.config/neofetch
!.config/neofetch/*
```
In essence, at every level of directory that `git` reads, the `*` pattern will match, but the explicit path to that directory will override it.
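You can verify this kind of whitelist .gitignore with `git check-ignore` (the file names here are invented for the demo):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
# "Ignore everything" plus explicit re-includes, as described above
printf '%s\n' '*' '!.gitignore' '!.gitconfig' '!.zshrc' \
    '!.config' '!.config/neofetch' '!.config/neofetch/*' > .gitignore
mkdir -p .config/neofetch
touch .gitconfig .config/neofetch/neofetch.conf stray-file
# check-ignore exits 0 when a path is ignored, 1 when it is tracked
git check-ignore -q stray-file && stray=ignored || stray=tracked
git check-ignore -q .config/neofetch/neofetch.conf && conf=ignored || conf=tracked
```

`stray-file` is swallowed by `*`, while the explicitly re-included path under `.config/neofetch/` is not ignored.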
If you clone locally, this already happens by default. Look at the <code>--local</code> option for `git clone`:
> The files under .git/objects/ directory are hardlinked to save space when possible.
This requires the repositories to be on the same filesystem and requires a filesystem that supports hard links (which is standard on Linux).
And it does support garbage collection, too. Because of how hard links work, each repository can delete or keep its link. When the last hard link disappears, the filesystem itself takes care of actually removing the file. (On a filesystem with hard links, you must delete all links to the underlying file for it to be removed.)
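This is just ordinary hard-link semantics, which you can see without git at all (a tiny demo, not git-specific):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
echo "object data" > blob-a
ln blob-a blob-b                                  # second name for the same inode
# Link count is now 2 (stat flags differ between GNU and BSD)
links=$(stat -c %h blob-a 2>/dev/null || stat -f %l blob-a)
rm blob-a                                         # delete one link...
survives=$(cat blob-b)                            # ...the data is still reachable
```

Exactly the same thing happens when one of two `--local` clones deletes an object file: the other clone's link keeps the data alive.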
>Does deleting commits remove their changes?
Yes, when Garbage Collection is run: https://git-scm.com/docs/git-gc
>Can I squash a bunch of commits together to hide the API Key?
Yes, subject to garbage collection being done after.
>Is it possible to just remove all history but the latest commit?
Yes, by squashing commits. But again, garbage collection.
The correct way to proceed is to rotate your API key so that the committed key is no longer valid. As someone else said, if it's been committed to a public repo, someone else has it by now.
If your employer finds the old key and asks, you tell them "Oops, and I rotated the key as soon as I noticed the error." Accidentally committing API keys happens with enough regularity that there are tools to scan for it, and I've never heard of anyone being punished for a single transgression.
As mentioned, this is bad practice. A safer option is git push --force-with-lease.
Here's the documentation with a jump right to the relevant section.
I suspect that the Y drive is a corporate remapping from where it would normally be.
I don’t think you can actually change the default locations without changing your HOME drive.
That being said there are options to add extra config files or use a different one
https://git-scm.com/docs/git-config
Think it’s -f and there’s also stuff on XDG_CONFIG_HOME
I work nearly exclusively in windows and use git daily fine.
If you’re using regression testing, just wait until you learn about git bisect and how to automate the search for a commit which introduces a certain bug.
Welcome to the fold. :)
You didn't pull the branch, but you presumably fetched it at some point if you could see it and interact with it in your local repo & client. If that's true, then you have a local copy of it, at least until the GC kicks in.
If you happen to have a record of the SHA1, you can simply `git branch <name> <SHA1>` to re-create the branch locally and then re-push it. If not, things get a bit more complicated.
Unfortunately, since you never checked it out locally, you won't find it in the reflog. Fortunately, you can still use git-fsck to find it. This will likely take some effort as fsck will report on all unreachable commits since your last GC, so depending on the size and amount of activity in your repo, you may have a lot of commits to check.
If you have too many to check manually, I'd do something like this to list the log message of all of the found commits. Adjust the parameters to `log` to your liking.

```
git fsck --lost-found
ls .git/lost-found/commit/ | xargs -I{} git log -1 {}
```
Piping output of ls isn't good practice, so be careful here, but other alternatives will need a bit more time to write than I have right now.
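Here's a loop-based version that avoids piping `ls`, with a throwaway repo to show the whole recovery flow (the branch name "doomed" is invented; note `--no-reflogs` is needed in this demo because the deleted branch is otherwise still reachable from the local reflog):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
echo a > f && git add f && git commit -qm "kept commit"
git checkout -qb doomed
echo b >> f && git commit -qam "lost commit"
git checkout -q -
git branch -qD doomed                            # the branch tip is now unreachable
git fsck --no-reflogs --lost-found > /dev/null   # writes SHAs under .git/lost-found/
subjects=""
for path in .git/lost-found/commit/*; do
    [ -e "$path" ] || continue
    subjects="$subjects $(git log -1 --format=%s "$(basename "$path")")"
done
```

The loop collects the subject line of every recovered commit so you can spot the one you lost.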
Let us know if you have any luck!
Global is global-for-your-user. On a unixy system that's probably the `.gitconfig` in your home directory. System is system-wide-for-all-users-on-your-machine. On a unixy machine it might live at `/etc/gitconfig`.
There are actually a few more locations, too:
> If not set explicitly with --file, there are four files where git config will search for configuration options:
> $(prefix)/etc/gitconfig
> System-wide configuration file.
> $XDG_CONFIG_HOME/git/config
> Second user-specific configuration file. If $XDG_CONFIG_HOME is not set or empty, $HOME/.config/git/config will be used. Any single-valued variable set in this file will be overwritten by whatever is in ~/.gitconfig. It is a good idea not to create this file if you sometimes use older versions of Git, as support for this file was added fairly recently.
> ~/.gitconfig
> User-specific configuration file. Also called "global" configuration file.
> $GIT_DIR/config
> Repository specific configuration file.
> $GIT_DIR/config.worktree
> This is optional and is only searched when extensions.worktreeConfig is present in $GIT_DIR/config.
> If no further options are given, all reading options will read all of these files that are available. If the global or the system-wide configuration file are not available they will be ignored. If the repository configuration file is not available or readable, git config will exit with a non-zero error code. However, in neither case will an error message be issued.
> The files are read in the order given above, with last value found taking precedence over values read earlier. When multiple values are taken then all values of a key from all files will be used.
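You can see that precedence order in action with a throwaway repo and a throwaway HOME (the key name `demo.source` is invented; `GIT_CONFIG_NOSYSTEM` keeps the system file out of the demo):

```shell
set -e
export HOME=$(mktemp -d)              # throwaway "global" config location
export GIT_CONFIG_NOSYSTEM=1          # ignore the system-wide file for this demo
unset XDG_CONFIG_HOME
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config --global demo.source "global"
from_global=$(git config demo.source)  # only ~/.gitconfig defines it
git config demo.source "repo-local"    # $GIT_DIR/config is read last, so it wins
from_local=$(git config demo.source)
```

The repo-local value shadows the global one, exactly as "last value found taking precedence" describes.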
This sounds like a classic XY-problem.
You want to make a directory of files available for download from a server. That's not what you use git for.
Do the files change a lot? You could just zip them in a zip-file and link to that.
You should maybe use sftp, rsync, bittorrent, or some other protocol designed for copying directories of files.
If you're set on using git-over-http without running git on the server, that's what "The Dumb Protocol" is for: https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols
Use git update-index with the flag --[no-]assume-unchanged (e.g. git update-index --assume-unchanged [file])
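A quick demo of the effect (the file name `settings.ini` is made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
echo v1 > settings.ini && git add settings.ini && git commit -qm "track settings"
git update-index --assume-unchanged settings.ini
echo v2 > settings.ini                          # local edit git now pretends not to see
hidden=$(git status --porcelain)                # empty: the change is invisible
git update-index --no-assume-unchanged settings.ini
visible=$(git status --porcelain)               # " M settings.ini" again
```

Just remember this is a performance hint, not a guarantee; some operations can still touch the file.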
The docs are pretty clear, in the absence of other options, 'reset' can be considered the opposite of 'add'. The branch head is only moved when a mode is specified (with the exception of the patch mode.)
Adding to what /u/pi3832v2 said, there is a very useful collection of gitignore-files by github. You should also see this section in the Pro Git book.
You're being downvoted, but what you say is partially true. It's a little more complex, and Git doesn't always store deltas, but it does indeed do it this way sometimes:
There are three differences:

1. `--mirror` will grab every last ref, including the refs for pull requests, notes, etc. etc., while `--bare` will only grab the normal branches and tags, i.e., refs named `refs/heads` or `refs/tags`.
2. If you `git fetch` in the `--mirror` repo, it'll silently overwrite every ref with the latest one on the remote side. (If you `git fetch` in the `--bare` repo, it'll grab a new copy of master as `FETCH_HEAD`.)
3. If you `git push` in the `--mirror` repo, it'll imply `git push --mirror`, which silently overwrites every ref in the other direction.

For a backup, use `--mirror` unless you're sure you don't want commits in pull requests etc.
(Also, consider backing up the pull requests, issues, and so forth themselves, using some non-git tool.)
If you're already using git to update over the network, then using git from a usb drive isn't much different. Add a remote with a path to the repository on the usb drive, rather than a network path. All of the commands will work exactly the same way.
If you want to carry around a single file instead of a bare repository on the usb drive, you could instead use bundles.
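A minimal bundle round-trip looks like this (all directory names here are placeholders for the demo):

```shell
set -e
src=$(mktemp -d) && cd "$src"
git init -q
git config user.email you@example.com && git config user.name You
echo hello > file.txt && git add file.txt && git commit -qm "first commit"
usb=$(mktemp -d)                                 # stands in for the USB drive
git bundle create "$usb/repo.bundle" --all HEAD  # the single file you carry around
dest=$(mktemp -d)                                # stands in for the other machine
git clone -q "$usb/repo.bundle" "$dest/project"
restored=$(cat "$dest/project/file.txt")
```

You can also `git fetch` from a bundle into an existing repo, so subsequent bundles only need to contain the new commits.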
If you're carrying around a bunch of binary files and your update process is "overwrite all existing files with these new files," git may not be the best choice for you. rsync may be simpler and easier to manage if you don't need the functionality that git provides.
You can probably use `git worktree` for this – the example in the documentation sounds a lot like your use case:
> You are in the middle of a refactoring session and your boss comes in and demands that you fix something immediately. You might typically use git-stash(1) to store your changes away temporarily, however, your working tree is in such a state of disarray (with new, moved, and removed files, and other bits and pieces strewn around) that you don’t want to risk disturbing any of it. Instead, you create a temporary linked working tree to make the emergency fix, remove it when done, and then resume your earlier refactoring session.
> $ git worktree add -b emergency-fix ../temp master
> $ pushd ../temp
> # ... hack hack hack ...
> $ git commit -a -m 'emergency fix for boss'
> $ popd
> $ rm -rf ../temp
> $ git worktree prune
Maybe try something like:
git branch --set-upstream-to=remotename/branchname
Well, if you deleted everything, it won't be that way anymore.
For next time, here are a few important commands to know:

- `git status` tells you what files changed.
- `git diff` tells you what changed in those files.
- `git reset --hard` clears all changes.

Try this cheat sheet if you want to know more: https://www.git-tower.com/blog/content/posts/54-git-cheat-sheet/git-cheat-sheet-large01.png
Honestly, I am not sure what you're looking to gain that you're not going to get with
$ git log --graph --decorate --oneline --all
(notice I added `--all`. I think that is important and will show you more branches, etc).
The only difference is that you will have it not in a terminal window.
Let me put it another way, you say you are looking for something "better" but that is rather vague and I am not sure what is better.
Don't get me wrong, while I prefer to do all of my git via CLI, I 100% recognize the advantages of a GUI at times and for convenience (preferably with an understanding of what is going on still). But it sounds like you're looking for ways to view the history; not interact with git. And I am not sure what you expect to get?
Also git kraken.
There's a great book that's completely free 'Pro Git': http://git-scm.com/book/en/v2
It should be able to answer your questions from any range of expertise.
I want to give you the answer that this is a fairly basic decision, but as you go up the learning curve, these answers become more philosophical. So, my ultimate answer is that it will depend on you and the team. Whether your personal, topic branches are local-only or if they'll be shared? Are you sharing through a central repo or doing truly distributed?
I'm going to assume the answers are 'local' topic branches and 'yes' to central repo because that seems the most traditional setup for effective work environments. Okay...
You're in your local repo, you've made some changes that you want to commit. Your changes are currently 'unstaged'. Use 'add' to stage them:
git add some_file
When you've made all the changes that you want to do (to fix an issue, to add a new feature, etc.), then you need to do a commit. (Please avoid using -m 'short message'; instead use the popup editor and write a good, detailed message: http://chris.beams.io/posts/git-commit/)
git commit
Commits are local in git, so nobody can see this change until you push it to a shared repo. The push syntax can be more complicated, but since I am assuming a central repo is used, then the command is simple:
git push
Which is equivalent to ('origin' is the default name for the original repo, and 'master' is the default name for the initial branch):
git push origin master
Either way, 'push' will try to share all the new commits you have created since your last checkout from origin/master. Keyword being 'try': if you haven't done a pull in a while, then there will likely be merge conflicts. To avoid that, you should:
git pull
That'll do a merge commit, which will allow you to resolve the differences manually.
> If the machine ourRepo has git set to /home/me/git then would the path would be /home/me/git/project?
No. `git clone` puts the repo in the directory you are currently in by default, using the repository's name for the folder unless you specify a different one. You can also specify a path to store it if you don't want it in your current directory.
See: http://git-scm.com/docs/git-clone
> Would this make a repeated directory? like /home/user/repo/project/project
Git makes a folder based on what you set during clone or the name of the git repo in the current directory you are in. You can specify a custom path. See the git clone docs link above.
> Are these changes only committed to the local machine?
Yes, Git is distributed. Every repo has a local version of the codebase. It isn't until you push to a remote will another bare-repo have a duplicate.
> Will these intermediate comments end up on the main repository when pushed?
First off, there is no main repo. Most projects only have a single origin and seem like it is the main, but Git is decentralized. You can have multiple remotes for multiple reasons. We have remotes that are our Gitlab server, we also have deploy remotes. Yes all commits on that branch will be pushed to the remote. You can squash commits or interactively rebase if you want to merge them into a final bundle.
> How do they get updates of any files pushed to the repo after they checked out?
They `git fetch --all` to match their local origin to the remote. Then they can either pull (fetch + merge, but they have already fetched, so this is redundant) OR they can `git merge origin/master`, as we have already done a fetch.
>I know git~~hub~~ can show me then once I've committed…
FTFY. GitHub doesn't do anything with a remote clone of a repository that Git can't do with a local one. Re: gitk.
`git revert -m 1 $COMMIT` on the staging branch, where `$COMMIT` is the id of the merge commit for your f1 branch, should do it.
This will create a new commit that reverts the changes in that commit. Since the staging branch is shared, you cannot use rebase.
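Here's the whole flow in a throwaway repo (branch names "f1" from your question, "$staging" standing in for your staging branch):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
echo base > base.txt && git add base.txt && git commit -qm "base"
staging=$(git symbolic-ref --short HEAD)
git checkout -qb f1
echo feature > f1.txt && git add f1.txt && git commit -qm "f1 work"
git checkout -q "$staging"
git merge -q --no-ff --no-edit f1     # the merge commit we want to undo
git revert --no-edit -m 1 HEAD        # -m 1: keep the staging-branch parent
head_subject=$(git log -1 --format=%s)
```

After the revert, `f1.txt` is gone from the working tree, but the history still records both the merge and its undoing, which is why this is safe on a shared branch.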
A router would presumably be better, since it could act as the DHCP server, assigning IP addresses to all the clients.
Anyway, something like GitLab for Raspbian might be the easiest way to get a repository host up and running.
> The documentation doesn't say short options can be combined.
It does, but you really have to know where to look. Typing `git help cli` would reveal the answer:
That same manual page has additional helpful info about things like negating options (`--no-color` negates `--color`) and separating options and their arguments (you can do `-m"Message"` or `-m "Message"` or `--message="Message"` or `--message "Message"`).
Don't tell your coworker to clone anything if they don't actually need version history. Just use `git archive`, possibly called in some sort of git hook to run it automatically at the appropriate times and store the results somewhere your coworker can access them.

git archive accepts path arguments, so you can use shell expansion to do something like `git archive -o images.tar -- **/*.dxf **/*.jpg`, depending on what kind of filename expansion your shell supports.
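A simple demo, passing a directory instead of globs to keep it shell-independent (file names invented):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
mkdir drawings
echo image-bytes > drawings/widget.jpg
echo 'int main(void){return 0;}' > main.c
git add . && git commit -qm "sources and drawings"
git archive -o images.tar HEAD drawings   # only the drawings/ subtree
listing=$(tar -tf images.tar)
```

The tarball contains only the drawings, not the source files, and your coworker never needs git to open it.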
It just means you already have a remote named “origin” attached to your local repo. There’s nothing sacred about the name “origin” just give it another name.
Also I’d definitely encourage you to read the docs. Here’s <code>git remote</code>. IMO Git’s docs are some of the best I’ve read. Very thorough and quite well written.
> Have at me: what am I doing wrong and how could I solve this ongoing issue? Only limitation: I am AWARE of the "Commit all and sync" but sometimes I work very late and don't notice stuff.
You probably could protect yourself from accidentally committing your local versions of the project files with Git hooks. You could write a pre-commit hook that throws an error if those files are changed.
Hopefully Visual Studio's GUI for interacting with Git (which I assume is where "Commit all and sync" is found) is reasonable enough to run your local Git hooks just like the official `git` command.
Then when you accidentally select the option to commit everything, it should fail to commit.
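A minimal sketch of such a hook (the file name `local.settings.json` is just a stand-in for whatever local-only files you want to protect):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
# Install a pre-commit hook that rejects commits touching the protected file
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached --name-only | grep -qx "local.settings.json"; then
    echo "refusing to commit local-only file: local.settings.json" >&2
    exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
echo secret > local.settings.json
git add local.settings.json
if git commit -qm "oops, committed everything" 2>/dev/null; then
    result=allowed
else
    result=blocked
fi
```

The staged protected file makes the hook exit non-zero, so the commit is refused.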
There is no cache or anything. It’s very simple: whenever you commit, git will read the name from the config file, unless you specify it on the command line.
You can have config files specific for a repo, or for your user or for the machine. Read more about this here: https://git-scm.com/docs/git-config
> Git stores the difference between two versions of code between commits,
This is simply wrong, because Git does not store diffs. It stores complete files (compressed, in the object directory), and if the number of so-called loose objects gets too large, they are packaged into a pack file. Pack files use so-called deltas, which are optimised for transport via the git protocol and are a different thing from the diffs you see every time you run `git diff` or `git log -1 -p`...
> so if you added and committed a 100mb file, and then deleted that 100mb file in a later commit, you're still pushing that 100mb file to github to keep track of its creation and deletion.
If you have committed a file and deleted it afterwards, it will (of course) be kept in the history. But that file will not be pushed each time you push; it is transferred only once.
The problem the poster describes is simply that committing such large files makes the repo slower, etc. There is only one real option in that case: remove the commits that contain the large file(s) and start using git lfs. Apart from that, most platforms like GitHub, GitLab etc. do not allow committing large files at all (the usual limit is 100 MiB)...
You can use git check-ignore to debug your ignore file. With the -v flag, it will tell you what rule makes it ignore a particular file.
git ls-files with the -i flag (plus an exclude source such as --exclude-standard) should list all files that are currently ignored.
Inside the .gitignore file, you can prefix a rule with a ! to say it should not ignore something. E.g. if you’ve ignored every folder named ‘backup’ but you have one backup folder with source files that should not be ignored.
They might be in the reflog. (See <code>git reflog</code>.)
Basically, the reflog contains a history (for a limited period of time, apparently 90 days by default) of what branches used to point to. So even if a branch points to something different now (which doesn't allow you to access the "lost" commits), it is probably possible (but not super easy) to recover it by looking at what the branch pointed to before.
It’s the full history of every change you make locally - each time you transition to a new commit, an entry is logged and you can view it or go back:
https://git-scm.com/docs/git-reflog/2.20.0
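A small demo of recovering a "lost" commit through the reflog (file and message names invented):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name You
echo one > notes.txt && git add notes.txt && git commit -qm "first"
echo two > notes.txt && git commit -qam "second"
git reset -q --hard HEAD~1                 # "lose" the second commit
lost=$(git rev-parse 'HEAD@{1}')           # the reflog still remembers it
git reset -q --hard "$lost"                # jump back to it
recovered=$(git log -1 --format=%s)
```

Even after the hard reset, the reflog entry lets you get the commit (and the file contents) back.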
See also https://github.blog/2015-06-08-how-to-undo-almost-anything-with-git/
In that case, you might be able to figure out a way to do it using git-svn.
What it's meant for is people who have a Subversion server but want to use a git client. But as the documentation says, it is bi-directional.
Normally, the idea is to take an existing Subversion repository and create a git clone from that. But if you could do the opposite, i.e. take an existing Git repository, then add a Subversion repository as a remote (perhaps by editing the config file?), then it seems like you should be able to use git svn dcommit
to take a git branch and turn it into a bunch of Subversion commits.
Maybe there's a better way, but this seems like it should work:
git svn clone
to clone it, giving you an empty git repo that tracks the empty Subversion repo.git svn dcommit
to create Subversion commits.First of all, why and where do you have to type a commit hash? Git revisions allow you to do basically what you want. Another option could be a fuzzy finder (like FZF) to search for the hash by specifying (parts of) the commit message.
> instead of a few string of random characters associated with the latest commit.
You could just copy and paste the hash, no? A description of an actual use case / workflow would help.
OK. Then I suggest you work your way through the Pro Git book. It should help you get started.
It sounds like you came to this point having seen GitHub. GitHub is just one of many Git hosting services for people who want to put their repositories online. Git itself is perfectly usable without a hosting service.
You can use an interactive rebase to easily select the ones you want to modify. See the git-scm doc for details, but in short:
$ git rebase -i HEAD~8
This will open the rebase interface for the last 8 commits. Replace the "pick" before the commits you want to modify with "e" (or "edit") and exit the editor (by default vim).
For each commit, you can modify the date to set the one you wish:
$ git commit --amend --date "October 1st 2018 08:00"
and then use
$ git rebase --continue
to go to the next edited commit.
And finally force-push to overwrite the remote.
You could set up an alias for this. e.g.
```
git config --global alias.stat "status --untracked-files=no"
# Then:
git stat
```
> the developers became so familiar with the “plumbing”, and knew so much about which low-level commands to combine to perform high-level tasks, that they decided they didn’t need any “porcelain” on top after all!
This is probably why the Git Book has 9 chapters almost exclusively dedicated to porcelain commands, and a final chapter reserved for plumbing. Quoting this final chapter (Git Book, §10.1):
> As you will have noticed by now, this book’s first nine chapters deal almost exclusively with porcelain commands. But in this chapter, you’ll be dealing mostly with the lower-level plumbing commands
> git is for text file versioning.
This isn't strictly the case but I do agree that Git is not the tool for this job. Op should look for a file syncing app like Syncthing.
Hooks are really just scripts, so you should be able to write one in whichever language you want. Call it `post-merge` (without extension), make sure it's executable, and put it in the `.git/hooks` folder next to those examples.
There's a bit of documentation about the different hooks that git can work with at https://git-scm.com/docs/githooks - and while there aren't any specific examples there they do mention one in the git codebase.
Completely unfounded guess, but it's possible that the Fetch function in this program actually does a rebase pull (https://git-scm.com/docs/git-pull#git-pull---rebasefalsetruepreserveinteractive) with --autostash (https://git-scm.com/docs/git-pull#git-pull---autostash). If it crashed after the stash happened, the pull wouldn't have happened but the local changes would be gone from your working directory. You could run `git stash list` at the command-line to see if you have any stashes.
If I'm right about this, I'd definitely write to the authors of the program about their mangling of the terminology and inability to recover from this sort of failure.
> But is TCP the underlying protocol used for both these protocols when the files are transferred?
Yup. Both SSH and HTTP(s) are application layer protocols that typically operate over TCP (in principle it might be possible to operate them over a different transport layer, but in practice you can assume TCP).
If you want more details about the specifics of git's transfer protocol, check out this page.