For anyone curious how to do this: you can just add the <code>--author=&lt;author&gt;</code> flag when committing. There are legitimate uses for that flag; I have used it to commit code a colleague wrote, for example.
No kidding. The number of subdirs isn't the only problem, but it's such an obviously wrong approach that they shouldn't need Github to tell them it's a scalability issue.
Pretty much every filesystem in existence scales poorly as the number of entries in a single directory grows. Some handle it better than others, of course, but it's still a terrible practice to stuff everything into one directory and assume you won't have any problems.
And it's such an easy thing to fix - hash the filename or subdirectory name, take the first 2 hex characters, and use that as an intermediate folder name. Now you have at most 256 top-level subdirectories, and with 16k entries each of those has ~64 children.
If you look at .git/objects/ in any Git repo, you'll see this is exactly what Git does internally.
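A minimal shell sketch of that sharding scheme (the directory and file names are made up):

```shell
# Shard files into at most 256 buckets named after the first two hex
# characters of a SHA-1 of the filename, like .git/objects/ does.
name="avatar_12345.png"                              # hypothetical file
shard=$(printf '%s' "$name" | sha1sum | cut -c1-2)   # two hex chars
mkdir -p "uploads/$shard"                            # hypothetical root dir
echo "would store the file at uploads/$shard/$name"
```

Any stable hash works here; the point is only that the bucket is derived from the name, so lookups never need a directory scan.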
Seconded. For me the two most useful pieces of literature on the subject of version control via git were:
The online git book, to learn the "language" and how it works
The common git workflows (originally the Git Flow article), to learn how to effectively use it
It's very unlikely for you to learn, for instance, that a branch is nothing but a pointer to a commit, and be able to exploit that aspect of git, if you only follow GUI tutorials.
Haha. Super useful, but easy to get into sticky situations. If you want to be a power user, I highly suggest learning a bit of how git works underneath. Once you know things like: branches are just pointers to commits, cloning a repo literally copies everything over, and pulling a branch first updates what's called a remote-tracking branch before merging it into your target branch, you'll be able to use git a lot better and understand what's going on in all those sticky situations.
This book is good: https://git-scm.com/book/en/v2
This is why presubmit scripts are great.
> Error: Messages.strings updated but no corresponding diff found for Plurals.strings
and if Person Who's Not Here Anymore wrote it, just blame the script to find the commit that added the error and read the description to understand why.
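As a sketch, a minimal git pre-commit hook producing that kind of error might look like this (the filenames come from the message above; the pairing rule itself is an assumption):

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): refuse the commit when Messages.strings
# is staged without a corresponding change to Plurals.strings.
staged=$(git diff --cached --name-only)
if echo "$staged" | grep -q 'Messages\.strings' &&
   ! echo "$staged" | grep -q 'Plurals\.strings'
then
    echo "Error: Messages.strings updated but no corresponding diff found for Plurals.strings" >&2
    exit 1
fi
exit 0
```

Real presubmit systems usually run server-side so the check can't be skipped with --no-verify, but the idea is the same.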
As a bonus you can use lolcommits to take a lolcat-style photo from your webcam with every commit.
I haven't used the GUI recently but my impression was that it abstracts away some of the steps, which makes it more difficult to understand what's happening. Picking up the CLI should be manageable even for junior engineers and it pays off in the long run.
A more balanced discussion about it - https://git-scm.com/book/en/v2/Appendix-A%3A-Git-in-Other-Environments-Graphical-Interfaces
And another, more one-sided, opinion - https://news.ycombinator.com/item?id=25791306
It's also related to how git was developed. First as a collection of command-line tools written in Perl, then slowly one at a time rewritten in a growing C code base. They did a fantastic job of establishing a set of simple primitives, terminology, and file structures early in the project that made this possible. I think this is at least partially attributable to Linus being unusually familiar with the idea of "don't break the user mode API" that he is so serious about when it comes to the kernel.. he applied the same principles to separate git plumbing and porcelain, which allowed vastly different tools to be used during different stages of the development process.
edit: core -> plumbing
> then I'm left hanging for weeks or months before a decision is made whether or not to move forward
This is completely broken.
In terms of the git part of it, what you can do is create your feature branch, work on it, and occasionally rebase it on the master to bring it back up to speed with the rest of the codebase.
Author here. Probably the version you'll get on Amazon is a snapshot of the print version, which at this point is 4 years old and has a number of errors. The version you can get at https://git-scm.com/book incorporates innumerable contributions from the community, and so is more up-to-date and correct, so while it won't help my Amazon ranking in tech authors, I'd recommend getting it from the Git website.
They started using BitKeeper instead of patches for scm (source code management). So before 2002 there wasn't any way to really track who was working on what. Since then they have moved over to Git which was developed by Linus Torvalds (original author of Linux).
I think git is what you are looking for and it's never a bad time to learn git. Here is the book about git, how it works and how you use it.
Github is a git host. You can think of Github as the cloud storage of your git repository, like Dropbox is cloud storage for "regular" files. So Github would (essentially) be to git repositories what Dropbox is to regular files.
Of course Github has some more functionality but it's not really something you "learn" as you would git itself.
While I agree that email and github workflows are not equivalent, I don’t quite follow you
“But it’s so much effort every time”
What is this additional effort you pay every time?
I’ve worked on many open source projects where git patches were the norm, both via email and as attachments to bugs (with email backend), and they don’t seem to be seriously more difficult.
That is a possible solution and what you're proposing is very similar to Git alternates, which exists today. We didn't use alternates because it doesn't solve the "many files" problem for checkout and status. We needed a complete solution to huge repos.
Having the full repo on my local machine is 90% more content than our average developer in Windows needs. That said, we did prototype an alternates solution where we put the full repo on a local network share, and ran into several performance issues.
Alternates were designed for a shared local copy. Putting the alternate on a file share behaved poorly, as git would often pull the whole packfile across the wire to do simple operations. From what we saw, random access to packfiles pulled the entire packfile off the share and into a temporary location. We tried using all loose objects instead, but that created different perf issues: share maintenance became harder, and millions of loose objects caused other performance problems.
Shared alternate management was also difficult: when do we GC or repack? And keeping up with fetching on the alternate is not inherently client-driven.
Doesn’t work if the user lacks access to the local network share and many Windows developers work remotely. We would have to make the alternate internet facing and then have to solve the auth management problem. We could have built a Git alternates server into Team Services, but the other issues made GVFS a better choice.
Alternate http is not supported in smart git, so we would have to plumb that if we wanted alternates on the service.
>You could argue that git should, by default, have some sort of a PR protocol that you could utilize.
It does. It's called "email".
Not the same I guess but it is functional
> Are you supposed to just look at bug reports and try to come up with a solution
Yes, that is one approach. I would advise you to debug and fix something you are able to actually reproduce.
Another approach is to reach out to the project's preferred community (irc, mailing lists, whatever) and ask if anyone would be willing to mentor you through a new project. This will probably be something trivial, but it will get you up to speed. This doesn't always work, but can sometimes turn up good stuff to hack on.
> Is there a beginners guide to Git and open source projects out there that could help me get started?
I think the freely available git book is pretty good. The thing with git (and cvs, svn, mercurial, etc...) is that using it to collaborate with other people is much different than learning the basic mechanics of how to create a commit, push/pull, etc... One of the best things you can do to learn git is to simply start using it in a collaborative manner and pick up the parts you don't know as you go. You'll be a git pro in no time.
Guys, guys. Let me let you in on a little secret... Git is open source. No, hear me out. You can turn any Linux computer into a git server, and push your local GitHub or GitLab files to your own server. It's not hard.
Want to host a readme? Instead of using markdown Install apache and host an html file.
It's old tech but it checks out.
One of the best examples of WHY I use GIT requires a small story:
In my first year programming, my 'team' had to build a semester long project, so in order to 'save' and 'store' all the work together, we simply used Dropbox and just saved as we did.
Unfortunately (and GIT users will know about this), we didn't realize halfway through that if let's say I made a change to the main program and uploaded it AND my friend made a change to the program and SHE uploaded it, HER recent change overrides MY change (cause she didn't use the copy I uploaded - she used the copy she got from Dropbox the day before).
So it required a lot of finagling and coordination to make sure we didn't accidentally override someone's work while uploading our new changes.
In addition, as you might expect, that file on Dropbox got LARGE, in addition to the backup copies of each iteration we kept - it was in the megabytes, and by the time the semester ended, each of us had a GB worth of our project - cause we had different 'versions' of the program to ensure we don't screw up.
WHERE GIT COMES IN
Once you know GIT (it takes a bit to learn properly, or else you basically run into this same problem), multiple programmers can work on the SAME copy of the program without getting in each other's way. My changes to the program won't override my friend's, and GIT will just merge the changes back together.
Also note, GIT tracks changes, not the actual program. Where something like Dropbox will actually keep the ENTIRE FILE (I know, I know, it doesn't do that - it's more complicated, but stick with me here), GIT only KEEPS THE CHANGES - a.k.a. just KBs of data. It's very cheap and very fast.
I think this was the one I read. Pretty technical, but I was a CS major. Plenty of other reference materials here as well.
<code>git daemon</code> is what you want, though in classic git fashion it's a bit less fire-and-forget than hg tools. You can setup an alias/defaults to ease usage.
Aliases will do you:
> git config --global alias.yes 'commit -a'
> git config --global alias.no checkout
> git config --global alias.pls 'pull --rebase'
Now you can use git status per normal, but you can also git yes to save your changes, git no <file> to revert changes, and git pls to get the latest changes from the server.
Check out the documentation on Aliases
I think the worst issue here isn't the idea of microservices (not that it helped matters much), but the horrific misuse of version control. It sounded like a lot of the problems stemmed from
> When pressed for time, engineers would only include the updated versions of these libraries on a single destination’s codebase.
This should not be possible. You should be using a git submodule to include your libraries in each of your repos. It sounded like before they were literally copy and pasting the library across every git repo, which sounds like a recipe for disaster.
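A sketch of that submodule approach (the URL and paths here are made up):

```shell
# Include a shared library repo as a submodule instead of copy-pasting it.
git submodule add https://example.com/shared-lib.git libs/shared-lib
git commit -m "Pin shared-lib as a submodule"

# Each consuming repo now records an exact commit of the library, and
# updates to a newer version are a deliberate, reviewable change:
git submodule update --remote libs/shared-lib
```

The key property is that every destination's codebase pins a specific library commit, so "only updating one destination" becomes visible in review rather than silent drift.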
And now I know about gitconfig: Conditional includes! That solves quite a few problems for me, wow.
Now if only it also supported remote-url formats of the current git repo, so I could say [includeIf "remote:origin=https://github.com/**"]or something.
I love this command because it helps people build the correct mental model for what git is doing:
> git log --oneline --graph --color=auto --decorate --all
Basically, run that, look at your commit tree. Then run whatever command. Then run the log command again and see what it did to your commit tree.
That gives you a good understanding of the commit tree. Then the following article fills the holes with regards to the differences between the head, work tree, and index: https://git-scm.com/blog/2011/07/11/reset.html
Rebasing is no longer that difficult once you understand the fundamental concepts behind Git. I'd therefore recommend trying to understand that first - afterwards, the official documentation is relatively easy to grok.
As a suggestion, this visual 10m tutorial I made might help with understanding the core concepts.
To be frank, it's hard to explain. Why is HTML/CSS better than a WYSIWYG editor? Because I have total control. But I bet WYSIWYG users would tell me they can do just as much as I can do. They can't, but I don't know how to explain that to them.
Git is the same way. If you actually understand the internals, you can do anything. You're completely free to navigate and manipulate your entire codebase. You can do 50 things, I can do 5,000.
I'd strongly recommend this book, https://git-scm.com/book/en/v2. I'm not sure there's a short explanation that's anywhere near as good as asking you to actually do the work to learn git at a deeper level.
Please create quality commit messages. If a developer came back to this shitty message they would have no clue where the ball was.
"Getting in the habit of creating quality commit messages makes using and collaborating with Git a lot easier. As a general rule, your messages should start with a single line that’s no more than about 50 characters and that describes the changeset concisely, followed by a blank line, followed by a more detailed explanation"
He wanted to list all git branches in the current repository. But the command git branch list creates a branch called "list" instead of listing branches. The correct command to list all branches is just git branch (as seen in the second command; note that "list" now shows up as a branch).
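For reference, the listing variants look like this (the -d line cleans up the accidental "list" branch):

```shell
git branch            # lists local branches (no subcommand needed)
git branch --all      # also shows remote-tracking branches
git branch -d list    # delete the accidentally created "list" branch
```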
Another cool feature of bisect is <code>git bisect run</code>. You can use it to automate your tests every step and let git figure out by itself which commit is the culprit.
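A rough sketch of the flow (./test.sh and the v1.0 tag are hypothetical; any script that exits 0 for good commits and non-zero for bad ones will do):

```shell
git bisect start
git bisect bad HEAD          # the current commit is known broken
git bisect good v1.0         # last known-good point (hypothetical tag)
git bisect run ./test.sh     # exit 0 = good, non-zero = bad (125 = skip)
git bisect reset             # done: return to where you started
```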
You can download Node directly from its website or use NVM for Windows if you want to be able to manage Node versions better.
Git for Windows comes with Git Bash, which has worked perfectly for me when doing typical web dev stuff.
Microsoft will be launching WSL2 and their new Terminal soon-ish. I haven't used WSL before, but it should allow for more Linux-ish workflow and for Linux programs to run. WSL2 is said to be a much improved version of it.
On some serious shit, though, check out git reflog. If you manage to fuck things up, you can usually find something in your reflog to go back to. Note that running git gc clears this out, though.
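For example, if a git reset --hard just ate a commit, a sketch of the recovery (the branch name "rescue" is made up):

```shell
git reflog                     # every position HEAD has recently been at
git branch rescue 'HEAD@{1}'   # pin a branch to where HEAD was one move ago
```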
You should know the basics of version control software like git. Not a mastery of it, but the basics (checkout, commit, push, pull).
Resourcefulness and persistence. These are personality qualities that are hard to interview for, but are important to possess. Even if you don't know something, you should be able to figure out how to figure it out, and then figure it out (or figure out that it's likely beyond your experience to do, and thus know when to defer to a more senior developer). It's common for developers of any skill level to tackle problems they've never tackled before, and it's expected that they can do the research necessary to get it done.
Similar to #3, you know how to break down a problem into small, achievable bits so as to arrive at a solution one baby step at a time.
You'll want to be sure that you know how basic CSS works: cascading/inheritance, selector types, box model, positioning (i.e. you know the difference between IDs and classes, borders/padding/margin, relative vs absolute positioning etc)
Obviously HTML, but that's pretty basic stuff.
This post should be upvoted to the top of /r/programming.
By looking at OP's history, it's clear that OP knows how to program. He's been in the industry for some time, he writes Perl, PHP...
But wait. Look at that post about GitHub, the most used code sharing website of all time. The new SourceForge. And in that post, the issues he talks about are not really about GitHub, they're about the tool itself, Git.
Almost all his issues could be fixed by reading the introduction chapters of the Git Book. He could just, you know, RTFM.
There's a very important lesson here: even in your own domain, after decades of gathering experience, there will be times where you think you understood, but you didn't understand anything at all.
This project is an interactive console for Ren'Py. Way more sophisticated than the original one in DDLC.
This version is far from done yet. So use it if you don't fear uncommented code.
I'll use it for a mod which will teach you Git, the version control system.
Here you go, Derek. How to merge two branches of code 101. I do this day in and out, and it's really not a big deal... phew, your 'old school' pre-CVS/SVN never-check-in-your-code-hope-the-power-never-goes-out-mid-save knowledge is showing.
I'd argue that it's overall a simpler experience for large code bases and teams than, say, Git. The trade-off is that it can sometimes be less flexible (IMO) than Git. On the other hand, there are situations in Git where I think one had better have read the Pro Git eBook cover-to-cover.
The complexities of Git can be disguised a bit in the "Github Flow", but I don't find that to be representative of what internal workflows end up looking like. (Remember, Google uses mega-repos and Perforce internally. So clearly workflow is not an easy thing to put to bed.)
If I were working with a very apt and smart team who cared 100% about the beauty and elegance of graph theory and Git, then I'd probably pick Git. If I had to work with people who "simply" want D/VCS to "just work", I'd pick HG.
If you're using Git then you'll have to learn how to work with remotes: https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes
Your kids would keep track of their own repo and your upstream repo. You can either keep track of all their forks and pull changes manually or ask them to make pull requests.
For C++ code our company has a githook that won't allow code to be committed unless it passes a clang-format check to ensure consistent formatting. Most editors have options to auto-clang-format, so developers just enable those options and never have to think about formatting again.
It's been great. Sure, there were some minor complaints over a few formatting edge cases when we first introduced it a year or two ago, but now folks are used to it. It's nice that we never spend time in code reviews discussing formatting, but can focus on more substantial matters instead.
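A rough sketch of such a hook (requires clang-format >= 10 for --dry-run; the extensions and the hook path are assumptions about the repo):

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): fail the commit if any staged C++ file
# is not clang-format-clean according to the repo's .clang-format style.
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(cpp|cc|h|hpp)$')
for f in $files; do
    if ! clang-format --dry-run --Werror "$f" 2>/dev/null; then
        echo "clang-format check failed for $f (try: clang-format -i $f)" >&2
        exit 1
    fi
done
exit 0
```

A server-side check is still needed to make it airtight, since local hooks can be bypassed with --no-verify.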
> A SHA-1 collision is less likely than every member of your programming team being attacked and killed by wolves in unrelated incidents on the same night. - Official Git documentation
Welp there goes my team.
If you are really worried, you can rewrite history in git and (force) push that up. Github even has a help page on how to do that. As an aside, I'd be much more impressed with someone who knew enough nitty gritty git to do such maintenance.
Try reading some of its manpages. It's unintelligible gibberish. Example:
"Update remote refs along with associated objects" - what the FUCK is this supposed to mean? Compare that to the much more intelligible Mercurial description, "push changes to the specified destination"
Also, git checkout and git reset each being used for several unrelated and partially overlapping actions is total bullshit. (checkout switches the branch, reverts local changes in specified files or creates a new branch based on the current one (with -b), while reset moves the branch pointer, reverts all local changes or does some things with the index that I don't even understand). The only sane way to use git is to use git commit -a instead of performing complex index manipulation before committing.
Actually yes the git documentation for reset does use the term discard:
Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded.
And if they used the term reset it'd be even more confusing. At least discarding indeed does mean throwing something out. The problem with the message is that it's ambiguous whether it's the actions you've done in the editor that are thrown out (creating the repo) or the files themselves. I'm not sure what the message should be, but even a simple addition could make it clearer:
> Are you sure you want to discard ALL changes to these X files? This action is IRREVERSIBLE!
Something I learned recently:
git push --force-with-lease
From the Git docs:
> Usually, "git push" refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it.
> This option overrides this restriction if the current value of the remote ref is the expected value. "git push" fails otherwise.
> Imagine that you have to rebase what you have already published. You will have to bypass the "must fast-forward" rule in order to replace the history you originally published with the rebased history. If somebody else built on top of your original history while you are rebasing, the tip of the branch at the remote may advance with her commit, and blindly pushing with --force will lose her work.
> This option allows you to say that you expect the history you are updating is what you rebased and want to replace. If the remote ref still points at the commit you specified, you can be sure that no other people did anything to the ref. It is like taking a "lease" on the ref without explicitly locking it, and the remote ref is updated only if the "lease" is still valid.
> --force-with-lease alone, without specifying the details, will protect all remote refs that are going to be updated by requiring their current value to be the same as the remote-tracking branch we have for them.
It's not perfect but it solves at least one of the issues with force pushing.
I would recommend to learn git first. After that most of the platforms (github, gitlab, bitbucket, etc.) and tools (IDE plugins) will come easy.
I liked this book, it goes pretty deep, but you can focus on the first chapters for a day to day work.
Fortunately, you don't need to extend that list. Instead, you want to create a custom merge tool and access it via this option: https://git-scm.com/docs/git-mergetool
If you write a program that is capable of resolving merge conflicts properly between two copies of a specific kind of binary, and invoke it using the merge tool command, Git will happily accept the results of that merge. You won't need to modify Git.
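One way to wire that up is a custom merge driver via Git attributes (the names "imgmerge" and "image-merge-tool" are hypothetical). Git hands the driver the ancestor, current, and other versions, and expects the result to be left in the current-side file:

```shell
# Register a hypothetical merge program for *.png files.
# %O = ancestor, %A = current side (also the output file), %B = other side.
git config merge.imgmerge.driver "image-merge-tool %O %A %B"
echo '*.png merge=imgmerge' >> .gitattributes
```

On a conflicting merge of any *.png file, Git then invokes image-merge-tool instead of attempting a text merge.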
Github hosts a git repository. Git is a source code management tool which tracks changes on a project over time. When you download that zip from github you're downloading the entire project and all of its history. Using git you can locally checkout different parts of the projects history.
Before trying to understand how GitHub works, I suggest you learn the basics of Git. Study the first 2 chapters of this book (in English), and once you feel confident, look up GitHub's own documentation. What I find most important is not knowing what the commands do, but when to use them.
It's a pretty basic command, so I imagine you don't have a ton of experience with git. Here's a book that you can read online for free that can give you a decently comprehensive understanding if you're interested: https://git-scm.com/book/en/v2
Have you tried reading the docs?
> Some workflows require that one or more branches of development on one machine be replicated on another machine, but the two machines cannot be directly connected, and therefore the interactive Git protocols (git, ssh, http) cannot be used. This command provides support for git fetch and git pull to operate by packaging objects and references in an archive at the originating machine, then importing those into another repository using git fetch and git pull after moving the archive by some means (e.g., by sneakernet). As no direct connection between the repositories exists, the user must specify a basis for the bundle that is held by the destination repository: the bundle assumes that all objects in the basis are already in the destination repository.
It explains the exact scenario where it would be useful, and explains the difference from "a standard archive file".
> Again 99% of the time autocrlf is the right thing.
100% of the time it's the wrong thing. Basically all documentation recommending to use core.autocrlf is itself outdated or based on other outdated documentation. Line endings are a data issue, not a user preference, and it's not just enough to have identical configurations for core.autocrlf because that doesn't address various shell interpreters' inability to deal with \r\n nor Visual Studio's stubborn insistence on using \r\n in project files. I almost feel like git init should create a default .gitattributes with
* text=auto !eol
^(The !eol is to explicitly unset any possibly user-specified <code>core.eol</code> setting, which can override the default behaviour of text=auto—thanks, Git.)
A commit doesn't contain changes, it has a pointer to a tree object that has an entire copy of the code base. Or more technically: the tree points to blobs (files) and more trees (directories). These blobs and subtrees can be shared across commits, but generally each commit points to a different top level tree (unless you have a commit that makes no changes).
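You can see this structure yourself with git's cat-file plumbing command, run inside any repository:

```shell
git cat-file -p HEAD           # prints "tree <sha>", parents, author, message
git cat-file -p 'HEAD^{tree}'  # lists the blobs and trees that commit points to
```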
There's a free book online that's great for learning how git works, but it's a bit deep if you just want to use the basics: https://git-scm.com/book/en/v2
For git, read the git book.
It is easy reading.
I am 100% serious that if you read the first 3 chapters (which is not a lot) carefully, understand it, and practise what you learnt, you will be more adept with git than 90% of the "senior" developers I have worked with over the last dozen years.
Use the CLI. You will maintain a better understanding of what you are doing. Look at the output of git log --all --decorate --graph often to reassure yourself of where you are. (Alias that to something short like git graph).
The entire Pro Git book, written by Scott Chacon and Ben Straub and published by Apress, is available for free. I suggest you start there. For a lighter read, try the Git Handbook published by GitHub itself. Getting Git Right by Atlassian simplifies it even further with practical examples.
I guess you don't read the man pages: man git-add:
Add file contents to the index
I found it weird when I saw many people stating that they found the git documentation pretty good, but then I realized most people think the Pro Git book is official documentation, but it isn't (Junio didn't approve any of it). Only what is inside the Documentation folder is official (i.e. man pages).
So git documentation is "pretty good" only if you exclude the official documentation. Have you seen the official user manual? I personally find it unbearable to read.
> It is not a database, and it is not an API.
That's where you are wrong. Git, at its core, is an immutable, distributed key-value database. See also: https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
Also, git is a well established API, there are plumbing commands that you can use. For example, one could use some combination of git ls-remote, git init --bare and git fetch --depth=1 to download a snapshot of a specific ref.
I think there is nothing wrong with using git as a database and as an SCM at the same time; it's practical, easy, and the core concepts are, IMO, well designed, especially for this use-case (vcpkg registries).
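A sketch of the snapshot-download combination mentioned above (the repository URL is just an example):

```shell
# Download only the objects reachable from one ref, without full history.
git ls-remote https://github.com/git/git.git refs/heads/master   # discover the ref
git init --bare snapshot.git
git -C snapshot.git fetch --depth=1 https://github.com/git/git.git refs/heads/master
git -C snapshot.git ls-tree -r FETCH_HEAD | head                 # inspect the snapshot
```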
Please don’t buy anything from this person. Anything they teach, you can learn for free from better resources. They want to scam you off your money under the guise of “I love teaching and sharing knowledge with people”. They’re using a fake account now because their old one was banned in many subreddits for spamming their “amazing” offers.
P.S. This course has a lecture on git. Alternatively, read this comprehensive and not at all difficult to read documentation book.
So you already know about git push, which pushes new commits on your branch to the remote.
The converse operation is git pull, logically enough. This first fetches the commits from the remote that you don't have, then merges those commits into your local branch. That is, you can think of it as:
git fetch origin master
git merge origin/master
If you want to rebase your not-yet-pushed commits — this will "replay" them on the new tip of the branch — you can use git pull --rebase instead. This is effectively a shortcut for:
git fetch origin master
git rebase origin/master
In most cases you never need to touch git fetch. It's not exactly at the bottom of the Git stack, but I would consider it a "lower level" command than git pull.
(origin/master here refers to a so-called "remote tracking" branch — this is a kind of staging area for fetched commits. They go here before some merge or rebase operation puts them into your regular branches. This two-step pull process allows for a variety of different policies to be implemented.)
The Pro Git book has a good summary of these commands. I suggest looking through some of the links there.
Don't use git reset --hard unless you really, really mean it! It literally means throw away everything, set HEAD to be a pointer to the identified commit (or branch), and check out that commit (or the commit at the branch's tip) into the working directory.
In particular, your particular use of git reset --hard will have turned HEAD into what's called a "detached HEAD". This is where HEAD points to a commit, rather than a branch. This is almost certainly not what you want.
If you're experienced in C# and want to use it the mono version is great. The only downside is it's lacking exports for everything except desktop atm and has some bugs (I have yet to find any substantial). If you use it the main C# dev is returning soon so it'll be in a better state around next month I'd assume.
If you want to use the built in languages you can go with the regular or steam version. The steam version is the same except it lags behind with updates.
As for working together you can and on different computers using a version control system like git. If you don't know it there's a lot of information out there on it and even GUIs to make it easy.
Well, git itself is developed with a kind of bug tracker built into the source code. When a new bug is discovered and is easily reproducible, a new test can be added to the test suite and marked as test_expect_failure. When a fix is implemented, the commit with the fix also changes test_expect_failure into test_expect_success. Special "Reported-by:" markers are added to commit messages to give credit.
> can you give me a reason why this is important
So that people know YOU were the one that contributed the code (or at least you're willing to sign your name under it).
> where I can get this?
TLDR: git commit -aSm. Uses GPG.
While everyone is busy circlejerking over the branch naming, Reddit is missing a pretty big QoL change. No more accidentally breaking your branch when you pull if there's been a rebase pushed to the remote.
>When you run git pull in a repository when you're tracking a remote branch, one of four things can happen: there might be no changes, changes on the server, client, or both. As long as there aren't changes in both directions, resolving the difference is straightforward: when there are no changes at all, there's nothing to do. When the server is strictly ahead of the client, the client fast-forwards to the state on the server.
>But when there are changes both on the client and on the server, what happens? That depends on whether or not you have the pull.rebase configuration set. If you do, your branch is rebased on top of where you're pulling from; otherwise, a merge is performed.
>These merges can clutter your history and be tricky to back out of without starting your pull over from scratch. Git 2.28 now warns you of this case (specifically, when pull.rebase is unset, and you didn't explicitly specify --[no-]rebase as an argument to git pull).
Pro Git, Ansible, From Beginner To Pro & Pro Vim have been the most help to me recently - the bulk of recent work has been datacenter network automation across vmware, UCS, nexus and arista.
Here is a litmus test I use, but it really isn't comprehensive. The links /u/nawfel_bgh posted are good, though. I would add https://git-scm.com/
> Author: System Administrator <>
That's not right :). You should set a name and email address the first time you use Git:
$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com
(this won't change your previous commits though)
You want to look at the different merge strategies in the .gitattributes file.
> You can also use Git attributes to tell Git to use different merge strategies for specific files in your project. One very useful option is to tell Git to not try to merge specific files when they have conflicts, but rather to use your side of the merge over someone else’s.
>This is helpful if a branch in your project has diverged or is specialized, but you want to be able to merge changes back in from it, and you want to ignore certain files. Say you have a database settings file called database.xml that is different in two branches, and you want to merge in your other branch without messing up the database file. You can set up an attribute like this:
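Here's that Pro Git example end to end, in a throwaway repo (branch and file names are just for the demo). The "ours" driver is defined as the no-op command `true`, which exits successfully and leaves our version of the file in place:

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo 'database.xml merge=ours' > .gitattributes   # use the "ours" driver for this file
git config merge.ours.driver true                 # the driver: keep our version as-is
echo 'shared' > database.xml
git add -A && git commit -qm init
git checkout -qb topic
echo 'topic settings' > database.xml && git commit -qam topic
git checkout -q -                                 # back to the original branch
echo 'my settings' > database.xml && git commit -qam mine
git merge -q -m merge topic                       # database.xml keeps "my settings"
cat database.xml
```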
No, gitignore is not mandatory, in the sense that it is possible to create a new repository without a gitignore file and commit other things to it and work without ever adding a gitignore file. Git allows you to work without a gitignore file so they are not mandatory in that sense.
Is having a gitignore a very good idea? Yes, absolutely. You can create a personal, system-wide ignore file (see also the git config setting for <code>core.excludesFile</code>) in which you should place lines for e.g. your personal editor's scratch files. A .gitignore file you track in a repository should be relevant to the files likely to need ignoring by that repository, so e.g. if it contains C code then intermediate and final build output (*.o, *.a, etc.); if you're tracking a LaTeX report then you might want to ignore *.pdf, and so on.
Use git rebase to squash your WIP commits out. Do so on a branch so you aren't rebasing on master/development.
Or you can use GitHub's UI to squash your merge requests into a single commit automatically.
In both cases you keep your WIP commits but "fold" them into your real commits.
FYI, you should be using log instead of whatchanged - you can use the same --since syntax with git log.
whatchanged is left in for legacy purposes, but it provides less information.
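A quick demo in a throwaway repo (file and message are invented); --stat gives you the per-file change summary whatchanged users expect:

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo a > file && git add file && git commit -qm "first"
git log --since="1 hour ago" --oneline --stat
```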
You should use git revert in this situation. It will undo the changes made by the faulty commit. Then push the newly created commit to remote.
See https://git-scm.com/docs/git-revert for more info.
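A minimal demo in a throwaway repo (file names and messages invented). Note that revert adds a new commit undoing the faulty one; it doesn't rewrite history:

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo good > app.txt && git add app.txt && git commit -qm "good change"
echo broken > app.txt && git commit -qam "faulty change"
git revert --no-edit HEAD     # new commit that undoes "faulty change"
cat app.txt                   # back to "good"
```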
Something like this?
The arg_object argument takes an object spec, so you can cat it straight out of the git repo.
It's not really an oversight. Git beats the pants off svn even with the regular old objects directory. But for years git has used object pack files where the objects are collected together, similar objects found and deltas are used.
I remember reading a technical description of the pack files a few years ago and it was a really really good read. I feel like it was either comments in the source code itself, or a mailing list posting. Either way, after reading it I felt like it made me really appreciate the elegance of their design, the interesting problems they faced and their solutions, and made it seem like any random programmer could easily write a reader/writer for these files. So many times compressed object files seem like black magic voodoo, but this seems like the opposite.
You might want to start splitting your code into separate files before it gets too long; it will save you a lot of scrolling and make your life easier.
Also, take u/CrocodileSpacePope's advice and get a free version control service account. It will not only be a place to host your code; you'll also be grateful to have it when you break your code. If you don't know what version control is, here is a guide. You only need to read chapters 1, 2 and the first part of chapter 6, and you'll be using GitHub within the hour.
I'm not a wizard, but there are two things I can think of that might help you.
The first would be separating the assets from the code.
Not only will this reduce the size of the project, it will also improve git's performance.
Recently I learned that git doesn't like binary blobs and stores each version of the blob instead of storing only one version and the diffs.
The second would be using submodules to break up the large, complex project into smaller projects.
The git book is a must-read for anyone who wants (or has) to work with git.
For me, it helped to be surrounded by serious git users for five years. Especially being forced to rebase, squash commits and rewrite history taught me a lot. Leaving your comfort zone is quite harmless with git, since you can almost always go back to a previous state (at least as long as you're working locally). Get to know git reflog!
He is telling you that all flat files (.html, .css, .cs, .java, etc.) should be version controlled. Maybe you use svn or some variation, but not having a backup of every state your code has ever been in is nuts.
So I hope you make some sort of backup - .zips, or at least Dropbox where you can roll back up to 30 days if you mess up something.
So head over to the getting started page for Git, and read about what it can do for you (tldr; save all your files, in every version, when you specify it, and let you work on the same code on multiple machines without a big hassle of merging them).
If you have a lot of clients whose code is all the same but who still have different colors and icons, I assume you store the paths for these in a db and just map each company to them.
If you want all your flat code to apply, look up something called Continuous Integration - this can be triggered when you save your code in Git.
Git does way, way more than that, but the basics are exactly right. Once you get in the rhythm of using it, you won't want to do without it even for solo projects.
If you're looking for a good guide, I'd recommend the one on git-scm.com. Though if you've been reading about git, chances are you've seen this already.
Next time use version control. You and your classmate should take some time to get to know git. Don't use Dropbox or a file share. Use Bitbucket (unlimited private repos) or GitHub (unlimited public repos).
edit: also you need to provide specific information. Files may be salvageable. How did you share code?
More specifically, I make my interns read the first 3 chapters of the free book (can be found here); it covers 85% of your use cases (and 99% of what an entry-level dev will encounter).
what is git?
github is an online host of a git repository. this allows for multiple people to easily work together on a codebase without breaking things (as much as possible with other options, anyways)
This is fantastic, how have I never heard of this before?
EDIT: It does say in the Notes section:
> Users often try to use the assume-unchanged and skip-worktree bits to tell Git to ignore changes to files that are tracked. This does not work as expected, since Git may still check working tree files against the index when performing certain operations. In general, Git does not provide a way to ignore changes to tracked files, so alternate solutions are recommended.
> For example, if the file you want to change is some sort of config file, the repository can include a sample config file that can then be copied into the ignored name and modified. The repository can even include a script to treat the sample file as a template, modifying and copying it automatically.
I for one learned about <code>git-rebase</code>'s <code>--autosquash</code> from reading this simple script, which I wouldn't have done so if stacked git was linked, so thanks OP.
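For anyone else discovering it here: a minimal --fixup/--autosquash round trip in a throwaway repo (GIT_SEQUENCE_EDITOR=: makes the interactive rebase run non-interactively for the demo):

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo one > f && git add f && git commit -qm "add feature"
echo two >> f && git commit -qa --fixup=HEAD      # creates "fixup! add feature"
# Apply the autosquash reordering without opening an editor:
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --oneline    # a single commit: the fixup was folded in
```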
Learn the tools your company already has implemented. Understand those systems and the reasons they were built the way they are.
Shell scripting will be important regardless of what other systems you use. Learn Bash. Learn git, read the whole book. https://git-scm.com/book/en/v2. Ansible is very useful for performing the same actions in multiple systems or repeatedly performing the same actions. Terraform is great for managing cloud infrastructure regardless of your provider. Learn containers: build one from scratch, and learn why it's more secure to do so. Learn Kubernetes for hosting containerized applications.
There's plenty of other tools too, and that's why I first suggest learning those tools which your company already use because you'll see them at scale and encounter real situations that you'll have to deal with.
A lot of people don't know this, but you can actually override the date of a commit! Check out the docs here. You are looking for the --date flag. I totally understand that inexperienced devs may leverage this, but I do not support that behaviour, as I have explained.
However, given that date overrides are an option in Git, I think being transparent about it is a better approach. The more people that know that you can commit in the past, the more people will devalue the contribution graph as a litmus test of a "good" developer.
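For reference, here is what a backdated commit looks like in a throwaway repo (the date is arbitrary; --date sets the author date, while the committer date stays at "now"):

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo hi > f && git add f
git commit -qm "past me" --date="2020-01-01T12:00:00"
git log -1 --format=%ad --date=short    # author date: 2020-01-01
```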
I hope that makes sense. Thank you for checking out my app! :)
Google "git prompt" and you will get to know more.
The one I use is given on Git's website: https://git-scm.com/book/en/v2/Appendix-A%3A-Git-in-Other-Environments-Git-in-Bash
And the symbol is a Unicode icon which I copied from Font Awesome.
I’d probably recommend learning it now. Basic usage is really not that complicated.
I would. Check out the official documentation or the book “Pro Git” by Scott Chacon and Ben Straub. It’s published under an open source license and can be downloaded in different ebook formats from the official site. Or you can buy a paper copy.
>A leading "**" followed by a slash means match in all directories. For example, "**/foo" matches file or directory "foo" anywhere, the same as pattern "foo". "**/foo/bar" matches file or directory "bar" anywhere that is directly under directory "foo".
So, you want `**/error_log`, which is equivalent to `error_log`. Either will do.
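You can check the pattern with git check-ignore before committing to it. A quick demo in a throwaway repo (directory names invented):

```shell
cd "$(mktemp -d)" && git init -q .
echo '**/error_log' > .gitignore
mkdir -p logs/deep
touch error_log logs/deep/error_log
git check-ignore error_log logs/deep/error_log   # prints both: both are ignored
```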
This. OP sounds like he's not using version control / is concerned about saving changes if untested.
Go to https://git-scm.com, and the r/git subreddit has good learning resources. If you're working alone it's a lot easier to learn committing and undoing mistakes. When you learn to branch, you can make more complex changes in dev and easily switch back to your master branch with the known-good prod code.
> In case you haven't heard: Today's update of the game reintroduced a few old bugs into the game, like the bobby pin weight, bulked items not working properly, fusion core spawn rate at work shops being too low by a factor of 10 (down to 0.8 instead of 8 per hour) and it also looks like duping is back on the menu.
>The way it's looking right now everything points to them having based this patch on an older build, hence the old squashed bugs coming back to life and even duping working again - check eBay and you will see how much the duping exploits are really "fixed".
Oh my God, this is hilarious.
If these dickfarts knew dick about source control, or even if they brainlessly jumped on the git bandwagon because it was trendy or because they got butthurt when Linus Torvalds called them idiots, they should have known how to identify and reclaim their bug fixes, even if they based their new patch off an obsolete build.
git diff <hash of most recent version> <hash of patch base version>
Voilà! If they use git internally, this is all they have to use to determine the differences in code between the versions. Then they can go through, identify their bug fix changes, and just copy-paste them from the terminal or wherever, however they like.
And that's if their source control workflow hygiene is non-existent, using a plan I formed in 3 minutes off the top of my head when git has more sophisticated, built-in tools to achieve what I'm describing.
...Why are these ~~cock sucking~~ clam slurping grundle lickers employed in place of non-lazy people who know their shit?
I'd strongly disagree - the format is well documented, it's a task that Perl is well-suited for, and it'd be easy to break it up into a series of steps.
Try something like this to get an overview of how+where things are stored:
and the documentation covers the concepts in more detail:
You can compare your code directly against the equivalent git commands and rapidly build up a toolset. Much of git was constructed from shell scripts and Perl scripts originally!
Fundamentally, that's not what Git is. It's a tool for tracking versions of code, not for deploying code.
But that hasn't stopped many people from using it for deployment. You can write a <code>post-receive</code> hook to build and deploy your code for you after a push, or use any number of existing deploy tools that use Git as a trigger and source for files.
Searching "git deploy tools" should turn up a bunch of options as well as some sample hooks.
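As a sketch of the hook approach, here's a toy end-to-end demo: a bare "server" repo whose post-receive hook checks pushed files out into a deploy directory (all paths and names are made up for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare server.git && mkdir deploy
# The hook: check the pushed master branch out into the deploy directory
cat > server.git/hooks/post-receive <<HOOK
#!/bin/sh
GIT_WORK_TREE=$tmp/deploy git checkout -f master
HOOK
chmod +x server.git/hooks/post-receive
# A "developer" clone pushes a file, which triggers the hook
git clone -q server.git work && cd work
git config user.name demo && git config user.email demo@example.com
echo hello > index.html && git add index.html && git commit -qm deploy
git push -q origin HEAD:master
cat "$tmp/deploy/index.html"    # → hello
```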
Git is also really helpful to learn in general! You can find the documentation here. At first, it might be a bit confusing, but once you have a working knowledge of git, it becomes much less daunting to use and actually starts making sense.
Basically, you fetch their subversion changes into a local git repo on your machine. You can then work with git features locally (stash, branch, blame...)
When you need to submit code to share, you basically submit the changes to the svn server.
//uses git-svn for subversion and git-p4 for perforce, whenever the place I'm working in has these inferior systems. Git-p4 breaks more often than git-svn
Git is decently well documented. You can even just search for git branch and find good explanations along with examples.
It's very handy and easy to use once you get the hang of it. Good luck! :)
When working on code with a team, you use "version control software" such as git to be able to work on the same code without stepping on each other's changes.
A "commit" is when you select a set of changed lines to bundle together.
"push" means to send commits to another system (usually the central server holding your team's shared code).
TL;DR: git add -u says, “stage all changes to files we're already tracking”. It won't grab new files and it won't let you pick and choose. git add --patch may be a better option if you want to pick and choose what to index change-by-change, or git add --interactive if you want to go file-by-file.
git add -u
git add --patch
git add --interactive
excerpt from <code>git help add</code>:
Update the index just where it already has an entry matching <pathspec>. This removes as well as modifies index entries to match the working tree, but adds no new files.
If no <pathspec> is given when -u option is used, all tracked files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories).
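A quick demo of that behavior in a throwaway repo (file names invented): the modified tracked file gets staged, the brand-new file is left untracked.

```shell
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo v1 > tracked.txt && git add tracked.txt && git commit -qm init
echo v2 > tracked.txt     # modify a tracked file
touch brand_new.txt       # create a new, untracked file
git add -u
git status --porcelain    # "M  tracked.txt" is staged; "?? brand_new.txt" is not
```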
There are a dozen ways:
Remember that git was created with almost this exact scenario in mind. Linus didn't want to have to push to a server. It was created so that individuals can push and pull from each other without a third party; each machine is a full peer. https://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon
Set up a private repo on GitHub and push and pull from that until you're done and ready.
If you want to do it locally, just share the drive that the repo is on.
Put the repo on a USB stick.
You can set up a git server locally and use that as your remote.
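The shared-drive/USB options boil down to using a bare repo on any path both machines can reach as the remote. A sketch with invented paths (temp dirs stand in for the USB stick or network drive):

```shell
shared=$(mktemp -d)/shared.git    # stand-in for e.g. /media/usb/project.git
git init -q --bare "$shared"      # a bare repo: no working tree, just history
cd "$(mktemp -d)" && git init -q .
git config user.name demo && git config user.email demo@example.com
echo hi > readme && git add readme && git commit -qm init
git remote add usb "$shared"
git push -q usb HEAD:master       # your partner clones/pulls from the same path
```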
If the templates are similar enough, you could attempt a Git rebase. First create a new branch pointing to your original commit with just the template code. Then delete everything, create the new template, and commit. Next, run git rebase branch_to_rebase. You may need to fix merge conflicts during the process. See the git rebase documentation for more information.
git rebase branch_to_rebase
You can search the output of any command by piping it through grep; for instance:
ls -la | grep 'foo'
So you can do that with any Git command, for instance:
git branch --list --all | grep 'foo'
In this particular case, if you look at the <code>git branch</code> documentation, then you will see that you can also supply a pattern to git branch --list:
git branch --list
> If a <pattern> is given, it is used as a shell wildcard to restrict the output to matching branches.
So you can run:
git branch --list --all 'foo'
git can be learned in a day.
naming convention can be learned in an hour, but often takes way longer to master.
> GIT does not have free private repos.
Do you mean GitHub? Git does not offer hosting itself, but allows other people to do it. Bitbucket has free private repos (I believe up to 5). Also GitLab might be worth looking into.
Then there are lots of ways to set up a free local private git repo, example - https://bonobogitserver.com/