So, when will this utility have the full functionality that mydumper had back in 2010?
Any reason not to collaborate with that project instead, aside from the "not invented here"/legal/licensing reasons (considering that one of your employees has collaborated in the past with it)?
How many different utilities are you going to create: mysqldump, mydumper (Mark is noted as one of the authors), mysqldbexport (from the utilities group) and now mysqlpump?
Sorry, I don't think I get your question... MySQL won't store your data in ordinary SQL files; it uses different files for different parts of the database information, and it also depends on your storage engine. For example, with the MyISAM engine you have .MYD files for the table data, .MYI for the table indices, and .frm for the table definition. If you want to find out the path where all those files are stored, check the my.cnf file for the "datadir=" entry in the "[mysqld]" section. If you want to manipulate the data in the database, use the mysql client command-line tool or a Windows GUI client like HeidiSQL.
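If it helps, a couple of quick statements you can run from any client to confirm this (the database name below is just a placeholder):

```sql
-- Where the server keeps its data files:
SHOW VARIABLES LIKE 'datadir';

-- Which storage engine each table in a database uses:
SHOW TABLE STATUS FROM your_database;  -- 'your_database' is a placeholder
```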
DON'T EMBED VARIABLES INTO A QUERY STRING!
seriously. don’t do it. ever.
    my $sth = $dbh->prepare( <<SQL ) or croak $!;
    SELECT DISTINCT CoName
    FROM SystemDemo
    WHERE Place = ? AND prov = ?
    SQL
    $sth->execute( $drop_var, $prov_selected );
    while ( my $row = $sth->fetchrow_hashref() ) {
        ...
    }
[edit] Ack, OP is using PHP, not Perl, with a mysql_query helper. See mysqli for the PHP equivalent.
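For illustration only, the same placeholder idea expressed with MySQL's server-side prepared statements (the values here are made up):

```sql
PREPARE stmt FROM
  'SELECT DISTINCT CoName FROM SystemDemo WHERE Place = ? AND prov = ?';
SET @place = 'SomeTown', @prov = 'AB';   -- hypothetical values
EXECUTE stmt USING @place, @prov;
DEALLOCATE PREPARE stmt;
```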
I strongly disagree, and I think your professor is wrong. Using explicit JOIN syntax makes it clear how you are joining the tables.
Doing joins in the WHERE clause is easier to miss or misunderstand.
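A minimal sketch of the difference, using made-up tables:

```sql
-- Join condition buried in the WHERE clause - easy to miss:
SELECT o.id, c.name
FROM   orders o, customers c
WHERE  o.customer_id = c.id AND o.total > 100;

-- Explicit JOIN - the join condition is clearly separated from the filters:
SELECT o.id, c.name
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  o.total > 100;
```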
Well, if you're not interested in writing it properly, throw caution to the wind and use mysql_insert_id() in your php script to get the id inserted by the first query, then use whatever value it returns in the second query.
Again, to be of any more help, we need the definitions of the tables involved (check the mysql docs for show create table
syntax) and the exact query you need help with. We are not mind readers and cannot help diagnose a problem with a query you don't include in your code. :)
AWS free tier may work for you depending on your needs.
You can probably google for a DigitalOcean credit/code as well. They probably still offer codes for $10 free credit which would get you a small vps for free for 2 months. The one I used when I signed up no longer works but I bet there are some out there.
If not, it's only $5 - $10 for the smaller ones.
edit: no idea if any of these still work but I found them with google:
For starters, one of my favorites https://www.safaribooksonline.com/library/view/mysql-fifth-edition/9780133038552/
Then for in depth, the online docs are pretty good, plus the internals section depending on how far down the rabbit hole you want to go: https://dev.mysql.com/doc/internals/en/
I believe what you're looking for is GNU Screen
It would allow you to do what you're wanting. That being said, it isn't anything MySQL-specific and will work with 99% of the applications you'll run across. Think of it as running multiple shells within your terminal session, with the ability to skip back and forth between them.
How critical is this data? If very critical, then the guys at Percona may be able to help you recover it. If it is not just a config error (did you change the config recently?) then it could be recoverable but not via any means I am comfortable relaying over Reddit for critical data.
The important lines in that error log are:
Error: trying to access page number 4294896512 in space 0, space name ./ibdata1, which is outside the tablespace bounds.
InnoDB: If you get this error at mysqld startup, please check that your my.cnf matches the ibdata files that you have in the MySQL server.
Do not bother with "tuners" and other crap. This is more fundamental. There is info here (http://serverfault.com/questions/592793/mysql-crashed-and-wont-start-up), as it says in the messages, but if this is critical data then you need help beyond Reddit.
I would suggest learning to use mysql without phpMyAdmin.
If you want a GUI, use MySQL Workbench or HeidiSQL. The GUIs have some nice auto-completion features that you won't find in the command line or phpMyAdmin, and they will help build your SQL skills.
Also, WAMP is ok to start, but seriously look into VirtualBox https://www.virtualbox.org/ and setup a proper virtualized dev environment.
Have you tried SequelPro https://sequelpro.com ?
I just added a BINARY column to one of my local tables and it was displayed with no problem.
You need to uncheck the box "don't load blob and text fields until needed" for it to display those fields on the Content view.
edit: FWIW, the MySQL client is protecting itself by not dumping the binary content out, as it may contain codes and characters it doesn't know how to display and might cause a problem.
That's not surprising, but this isn't a phpmyadmin support forum. MySQL has lots of different applications that connect to it; we don't support them all, we support mysql itself. Maybe check out https://www.phpmyadmin.net/support/ ?
Further to others' responses, you might also want to grab MySQL Workbench:
> https://www.mysql.com/products/workbench/
Mostly so that you can at least connect to the database that you're working with, in order to be able to see what's in there, without all of the pissing about with Excel at first.
At the very least, that'll help you confirm that you can actually connect to the MySQL server that you need from your machine, and will give you a clear picture of what tables and data are available. Then, use that information to make the connection from the Excel side.
Never used Excel's Power Query thing, though.
MySQL would do what you want and MySQL Workbench is a nice tool for building a database. It allows you to build a diagram and then forward engineer your database from there.
This site has a mysql tutorial
The problem might be that the "unique" id is not really unique in some of the tables. Maybe there are multiple rows for the same "unique" id? Your join will then try to create all the combinations of those, but it sounds that you might want only the latest rows for each id.
If that is the case, the things they discuss here might help:
https://stackoverflow.com/questions/9103847/sql-how-to-make-a-join-query-with-only-having-the-largest-entry-of-any-type
edit And if, for example, the customer phone number is in one record and the customer email in another, then you will need to create that kind of join for each column, where your condition is, for example, that the field is the latest non-empty value.
It might be easiest and fastest to merge each column with a separate update query, rather than trying to do it all in one huge query.
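A minimal sketch of the greatest-row-per-id pattern from that Stack Overflow link, with made-up table and column names:

```sql
SELECT cu.id, co.phone, co.email
FROM customers cu
JOIN contacts  co ON co.customer_id = cu.id
JOIN (
    SELECT customer_id, MAX(updated_at) AS latest
    FROM contacts
    GROUP BY customer_id
) last_rows ON last_rows.customer_id = co.customer_id
           AND last_rows.latest      = co.updated_at;
```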
> massive amounts of biological data
Can you give us an idea of how much data? How many tables & how many rows (more or less)?
> I would like to have a little summary of each experiment that folks can look up and poke around in images taken and then be able to query the whole collection of data for points of interest.
MySQL would do this, though I'm not sure if it's the best choice.
Within databases there is the concept of a drop-in replacement, which is to say a database that is comparable, has all of the existing features, and works with all of the same frameworks, but has added benefits. Amazon offers Aurora, which is a drop-in replacement for MySQL. They offer a service that allows the database to scale with you. So if you need more resources, it's a few clicks (and more money), but completely doable.
Separately, Google offers MySQL & PostgreSQL hosting. The same concept of availability applies. Also, I know this is a MySQL forum, but for all things data analysis I find that people are more interested in integrating with PostgreSQL. I also have a bias towards PG, so take that with a grain of salt.
But mostly, you'll need someone who can work with SQL. I don't know how JMP is, but the database itself comes without frills. If you don't know what to do with that, it may not help you.
    KEY FK_Users_idx (UserID),
    CONSTRAINT FK_Users FOREIGN KEY (UserID)
        REFERENCES students (UserID)
        ON DELETE NO ACTION
        ON UPDATE NO ACTION
Do the same on the kids_callhome and pwd_recovery tables, changing the constraint and FK names on the pwd_recovery table.
Google: https://stackoverflow.com/questions/3316410/sql-error-errno-121
You're not gonna get away without a little bit of googling/reading.
http://php.net/manual/en/mysqli.query.php
mysqli_query returns a mysqli_result object. It will have a property called num_rows. If you don't use a COUNT() and you just query directly for a result set, then this will work. If you use a COUNT, it will always return one row, provided you don't have an error. Then you can use one of the fetch_ methods to get the value, trivially.
XAMPP is great for learning in a local environment. I've been working with this for a couple of months and can possibly help if you see any issues. This extra info is also helpful.
Something you may want to add is the number of engine row operations - super useful for analyzing a query post-execution: SHOW SESSION STATUS LIKE 'Hand%';
I was thinking of creating something like that for my developers.
More info on how to use the handler statistics to debug the performance of a query.
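A minimal sketch of how that looks in a session (the query in the middle is just a made-up example):

```sql
FLUSH STATUS;                                 -- reset the session counters
SELECT * FROM orders WHERE customer_id = 42;  -- the query you are profiling
SHOW SESSION STATUS LIKE 'Hand%';             -- Handler_read_*, Handler_write_*, ...
```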
MySQL doesn't have a way to return the result of a match in a regular expression or replace substrings using regular expressions. Here is the top result of a google search for "mysql regular expression replace" which answers your question.
It would be possible to do what you describe in python, or possibly using mysql's other string manipulation functions if you don't need full regular expression syntax.
sharding is my first thought. Keep all information related to a specific client / segment / shard available on that individual shard.
My second thought is that MS SQL has the concept of Link Servers, and after doing a google just now, it looks like the similar technology (Federation) has already been mentioned above:
https://stackoverflow.com/questions/5370970/how-to-create-linked-server-mysql
~~If that is the case, why the limit 1? That reduces your results down to only a single row. Regardless, it is likely the ORDER BY clause that is causing the performance issue.~~
Ah, so this is a top-ten deal? I strongly suspect the ORDER BY is slowing things down. Try removing it and see how performance is affected - you may want to explicitly tell MySQL to execute the query without using its cache via SQL_NO_CACHE, as any cached results will throw your numbers off.
Again, add EXPLAIN before the query and post the results. It will help a lot in figuring out what is happening. My suspicion is that, with the ORDER BY, you will see a line that reads "Using filesort". Check this article out to see what that means.
If this turns out to be the issue, there are some alternative methods for getting top-ten style results worth discussing.
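Something along these lines, as a hedged sketch (the table and columns are placeholders, not your schema):

```sql
-- Timing run, bypassing the query cache:
SELECT SQL_NO_CACHE player_id, score
FROM   scores
ORDER  BY score DESC
LIMIT  10;

-- Then look at the plan (expect "Using filesort" in Extra if no suitable index exists):
EXPLAIN
SELECT player_id, score
FROM   scores
ORDER  BY score DESC
LIMIT  10;
```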
This stack overflow post has a decent explanation of how to delete from multiple tables simultaneously.
It then falls on you to craft a simple query that returns the rows you want deleted. I'd like to see what you have already tried so I can suggest a way to fix it, but I suspect your issue is that you created an unintentional cross join.
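For reference, a rough sketch of the multi-table DELETE syntax from that post; the tables and condition here are made up, not taken from your schema:

```sql
DELETE o, oi
FROM   orders o
JOIN   order_items oi ON oi.order_id = o.id
WHERE  o.status = 'cancelled';
```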
I guess these are environment variables. You just need to add them to your deployment descriptor, in the container section:
    env:
      - name: key
        value: value
I'm not sure what you're asking here. Most people don't go out and get data to put in their database and then start using it. Instead, they build an application that uses MySQL, and use insert
statements to put the data into the database as the user directs.
If you're looking for interesting data sets that you might want to use as a basis for a project, you can check out the Quora list and the Amazon List for ideas. Importing the data into mysql is left as an exercise for the reader (LOAD DATA INFILE
would be a good search term).
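If it helps, a minimal sketch of such an import; the file path, table, and columns are hypothetical:

```sql
LOAD DATA INFILE '/tmp/cities.csv'
INTO TABLE cities
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES            -- skip the header row
(name, country, population);
```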
As an alternative to MySQLTuner you could try Tuning Primer which has a little better output in my opinion.
Also for InnoDB Tuning try this blog entry from Percona.
> password_hash
So you're going to want to
select username, password from table where username = ?;
then, compare the pw you get back with this
http://php.net/manual/en/function.password-verify.php
(sorry for the brevity...I'm pooping and only have my phone)
nothing really jumps out at me. I know they recommend at least 3 nodes for a stable setup that avoids split brain, but maybe someone over at https://groups.google.com/forum/#!forum/codership-team can help? The Percona and Galera devs are on that group and they will probably be able to help better than I can.
First you should verify that it's actually MySQL sending the data and where it's going to.
You can run a packet trace on the server to see which process is actually using the traffic. Use tcpdump
on Linux to capture the traffic, then download the resulting dump to your computer for analysis in Wireshark:
https://www.wireshark.org/docs/wsug_html_chunked/AppToolstcpdump.html
If the culprit is MySQL, you can feed that very same tcpdump log into Percona's pt-query-digest
tool (skip to the tcpdump section):
http://www.olindata.com/blog/2014/08/analysing-slow-mysql-queries-pt-query-digest
That'll tell you exactly which queries are using the traffic and how much traffic they're using, right down to the byte.
If you're using innodb with file-per-table enabled, you may want to check out percona xtrabackup. It works kinda like mysqlhotcopy did, but for innodb table files. You'll still need to use the tools to backup and restore, but it's closer to a file copy than a backup sql file generator, so it's considerably faster.
A framework is going to be overkill for this.
Using PHP, use this class: http://php.net/manual/en/book.pdo.php to connect to your database, run the statements and return the values into an object for you to loop through and echo into your HTML.
That is the simplest way of doing what you want to do.
I'd like to veer off topic though and just state that you shouldn't be opening ports to anything unless it is meant as a public facing service, such as a web server. However, in your case, even though you'll need port 80 access to this site, do not open it up to the internet. I can't stress enough that you should be using a VPN when accessing resources remotely.
edit: I'd also like to say that this book: http://www.sitepoint.com/books/phpmysql5/ is fucking fantastic for learning PHP and accessing a MySQL database with it. It explains SQL injection and uses best practices and I'm confident you'll be able to do exactly what you're trying to do by reading this.
For starters, are you implicitly using stuff from homebrew for a reason?
https://www.mamp.info/en/ will give you everything you need with a simple install.
If you are editing httpd.conf
    cd /etc/apache2
    sudo cp httpd.conf httpd.conf.backup
Always backup system files that you are editing so you can just switch back if you screw up.
sudo nano httpd.conf
Should let you make whatever edits you need to make. Ctrl X to exit, it will prompt to save and just press enter.
> I've done Postgres on top of OpenShift in AWS. Does that count?
I believe so -- https://www.openshift.com/products says: "OpenShift is part of the CNCF Certified Kubernetes program, ensuring portability and interoperability of your container workloads".
> with a lot less benefits
The benefits we are hoping for are:
UPDATE
BTW, thanks for your detailed reply.
> I'm looking to set up either MySQL running from my desktop
If the database is important to the company it should really, really be running on a server. You don't want to wind up in a situation where the whole business goes down because your desktop needs a new motherboard.
> and using Access
No.
> and using Access
The memories. NOOOOOO.
> and using Access
AHHHHHHHHHHHHHHH.
In all seriousness, I really recommend against using Access for anything. It has caused me nothing but pain at my current job. Maybe the newer versions aren't as bad, but I am not optimistic.
If your needs aren't terribly complex (simple CRUD type operations), you are best off going with what you know - build something in PHP to let people work with your database. Personally I prefer web based tools because it doesn't require that people install anything - everyone already has a browser ready to go. Check online, there should be tons of frameworks for getting something like this off the ground fairly quickly (I do mostly java, so couldn't tell you what is out there).
> Just noting, I'm not too knowledgeable regarding servers or MySQL specifically, any suggested MySQL versions would be appreciated.
I recommend using MySQL 5.6 or 5.7, as they solve some outstanding issues that were a bit of a pain. Alternatively, you could use MariaDB or Percona's variant of MySQL. Both add features, fixes, and tools that are extremely useful.
Install WAMP. I'm going to assume you use Windows; these instructions still mostly apply to other OSes.
Run WAMP and find it in the Windows task tray. Left-click and choose phpMyAdmin.
Choose New from the left-hand side. Name the database Cheaters.
With the database list selected, choose "Cheaters".
Choose Import. The file(s) should easily import and format correctly.
Though I don't know for sure, as I wouldn't download it myself. That data dump contains partial credit card info. If you already downloaded it, it doesn't matter anymore, so you might as well look.
There are like 10 other ways to view it, but if you wanted to learn some MySQL then this will work.
Once you have it imported and have the right DB chosen, google how to sort the results where the email contains ".edu" or ".gov" - something like the sketch below.
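A hedged example of the kind of query you'd end up with, assuming a hypothetical accounts table with an email column:

```sql
SELECT email
FROM   accounts
WHERE  email LIKE '%.edu' OR email LIKE '%.gov'
ORDER  BY email;
```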
Good luck
Edit: If you didn't want to download a whole webserver setup, then get MySQL Workbench. It's roughly the same thing as phpMyAdmin, but it isn't in your browser.
I'm not sure I get how migrating terabytes of data into Postgres, which still has latin1 support anyway, will address your concerns.
Your applications will need to be updated to use new client libraries, you'll have to spend time auditing queries to make sure there's nothing MySQL-specific going on, fix it if there is, and probably update connection configurations anyway.
Don't get me wrong here, this certainly is not about which is better in general, MySQL or Postgres. If this were a new project, that debate could and should be had. But migrating everything over to Postgres because you don't want to just update your application configurations to specify a charset seems pretty... how do I put this diplomatically... silly.
Switching to postgres is not going to eliminate the need to have properly configured applications.
Good luck getting buy in from the rest of your operations and engineering teams for this change.
Spoiler: performing backups does indeed cost money and require additional resources. Spoiler 2: not having backups usually costs thousands or millions of times more money ;-)
Backups have to be remote and offline, for some definition of those words that you yourself agree with.
Please note I asked about your non-functional requirements in my first question. "They have to be free" is an important omission - the answer to that is you go to your manager and tell them you will both lose your jobs without backups. You can also mention issues like GitLab's outage two years ago: https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
Well, I don't know how it's going right now, since I haven't used it in the past half year, but in my experience it ran very well. MySQL Workbench was very slow, which was the main reason to switch. There is also https://dbeaver.io/, but it wasn't as good as these two.
I don’t know if it’s available in your country but this is a feature of Datagrip. Once you connect your database, run the query you’re interested in and then you can download.
Datagrip is a paid service and other services might be free but I know this feature works because I use it.
*Data Modeling Made Simple* and *Data Modeling Essentials* are two popular, and current, books on the subject.
I think a very simple approach involves thinking of nouns, then thinking of verbs.
Nouns are the entities in your scope. Students and teachers, for example.
Verbs are relationships. Students attend classes. Students receive grades. Teachers teach classes.
So now we know we have two entities, students and teachers. And we've identified a couple relationships between them.
Maybe there are more relationships. Maybe some adjectives help you identify additional attributes for the entities.
Now, it's just a matter of patterns. Once you know about the different kinds of relationships, you can build tables and columns out of well-known patterns. A 1:1 relationship, for example, always looks the same way; there are easy ways to model many-to-many relationships.
And that's it in a nutshell. Hope it helps. The books go into far more detail.
It's a brand-new installed database; I was just going to start a project, and now not even mysqld works. This is my systemctl status mysqld trace:
Assuming that this is what you are talking about: https://grafana.com/docs/grafana/latest/datasources/mysql/
You could fiddle with the max open, max idle and max lifetime in the connection settings. Maybe, I have no clue.
> SQL Workbench
did you mean MySQL Workbench? that's just a front-end app
but it accesses a MySQL database
and MySQL doesn't use LISTAGG, it uses GROUP_CONCAT
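For example, a minimal sketch with a made-up table:

```sql
SELECT department,
       GROUP_CONCAT(name ORDER BY name SEPARATOR ', ') AS members
FROM   employees
GROUP  BY department;
```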
Does your insert work? I see commented out lines with $db instead of $link and some capital letters in column names that are potential failure points, for example.
Also, use mysqli functions (or PDO, or...) rather than mysql. It's deprecated. Short explanation
While it's an extra query, here's what I would do to make it work if it needs to work so you can code other things:
Do the insert of the new student.
SELECT studentid FROM student ORDER BY studentid DESC LIMIT 1 (that syntax is likely not quite right, but you get the idea - $db, please give me the studentid column, with the largest number first, but only give me one.)
For validation, I would amend the above to select studentid and, say, first name, last name, maybe another field as well.
An if check to make sure the row you've grabbed matches the data you still have in those variables. Assuming it does...
Insert studentid into the other table and you're done.
Ideally, I'd probably wrap that query and the validation into a function. The function would return $functionStatus and $studentID (which can then be checked to make sure it matches, for additional validation if you like).
$functionStatus could be "OK" for success, "INSERT ERROR " . (error number) if the insert failed, "QUERY LAST STUDENTID ERROR " . (error number) if that query failed, etc. so you can handle it appropriately.
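For what it's worth, a hedged sketch of a simpler route for the "grab the new id" step, assuming studentid is an AUTO_INCREMENT column (the column names here are made up):

```sql
INSERT INTO student (firstname, lastname) VALUES ('Ada', 'Lovelace');
SELECT LAST_INSERT_ID();   -- the id generated by this connection's last insert
-- ...then use that value in the INSERT into the other table.
```

LAST_INSERT_ID() is per-connection, so another client inserting at the same time won't affect the value you get back.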
Best practice is probably not to use root for the db user, but since this is in a db called "project" I'm assuming this is not production code. Do take the above comments about mysql_ vs mysqli_ seriously, though.
Also, the actual query itself is being run against $db rather than $link. That would certainly cause all queries to fail if that's the actual code as it currently stands.
>you have it in a join but then don't use any fields from it.
You are right, I don't need it. Doing this at 4 AM is sometimes not the best idea. :) Getting rid of it brought the time down to about 200 ms on average.
> Out of curiosity, I wonder what the EXPLAIN plan would look like if you did > SELECT ... FROM game_bets STRAIGHT JOIN users ON () STRAIGHT JOIN betting_groups_user_pivot ON() ...
SAMPLE but without the join on betting_groups_user_pivot
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SIMPLE | game_bets | | ALL | game_bets_user_id_index | | | | 91545 | 10 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | users | | eq_ref | PRIMARY,users_id_unique | PRIMARY | 8 | laravel.game_bets.user_id | 1 | 100 | Using where |
> It is also curious that I don't see game_id in any kind of condition, so a lot of past data would enter any query as it stands now.
Sorry, but I am not sure what you mean by that? Do you mean the field in the COUNT
instruction? If yes then I have also changed that to use points
now.
Here is also the full schema for the tables in question
Not a stack you mentioned, but I've used this several times to build an API using PHP / MySQL https://github.com/jmathai/epiphany (There are lots of PHP choices, but that's one I've used)
If you need lots of enterprise features, AWS has its own offering: https://aws.amazon.com/api-gateway/
Actually, there are two different databases where I run those commands. The first is my own database, and the second is a HackerRank question where I think the commented and uncommented code should be the same, but the commented-out version doesn't work and the uncommented one does.
> what files do i need to try and find to get the data i'm looking for?
Go to backups > mysql. (Yes, -1 troll, but happily it is a lesson that everybody has to learn once). Backups for data, git for your schema/app/config.
You may recover the schema from the .frm files and, with the right tools, even from ibdata1/.ibd files (it's not easy to recover from the InnoDB internal dictionary - it has its own format, and I've never done the inverse process). You could even give the binary logs a try (host-bin.000001; if you recently imported the database, the CREATE statements would be there).
Also available on SquirrelSQL, as you can see from the source file
I am out of my field here. I am consulting the php manual on understanding classes and $this. Here's the link.
So for the insert function to be part of an object and to gain access to $this, I need to create a class? Something like:
    class SomeClass {
        public function insert($table, $cols, $vals) {
            [... insert code ...]
        }
        $dbh = new PDO();
    }
Also, by inserting that new PDO(); line, would that make $dbh a parameter of my function?
Would it be better practice for me to have this class as a different .php file and then include "someclass.php" in my original code?
Thanks so much for the initial reply, and if you decide to further help me, thanks for your patience.
oh heh, sorry skipped over the first part.
I would probably print out to a log what the AJAX form is sending to the API. I'd also check what the browser console is saying; perhaps some kind of error is occurring which is preventing the JS from posting properly.
It also appears that you're sending the query to the db, but you're not correctly fetching the data out of the result object. Check this out. http://php.net/manual/en/mysqli.quickstart.statements.php
print_r and var_dump are useful when trying to figure out what a variable is set to.
Also, I would highly recommend taking a look at prepared statements to get around the janky '" stuff, this will help with sql injection and it'll make things easier to use. Also, don't store passwords in your database, store the hash of a password and compare against that :)
Hi! Glad you got it working. So mysqli is expecting the host argument to be something like an IP address or something that is resolvable.
http://php.net/manual/en/function.mysqli-connect.php
So likely what happened was one of two things: either the variables were not defined and mysqli defaulted back to the MySQL socket file, or the string didn't resolve to an IP address.
> $host = "$username-$projectname-$projectid";
For example,
$username = 'notenoughcharacters9';
$project = "stuffnthings";
$projectId= 99;
So you could try resolving that string to see if it would work. dig notenoughcharacters9-stuffnthings-99
Looks funny right?
Two possible things could be happening:
1. What is your MySQL max_connections set to? Could PHP be exceeding what MySQL is allowing? The default is either 100 or 151 depending on the version in use (see the sketch below for how to check).
2. Do you have PHP persistent connections enabled? That could also be causing issues; see http://php.net/manual/en/features.persistent-connections.php for details.
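A quick way to check the limit and current usage on the MySQL side:

```sql
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';
```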
You do not have the required server experience to run a wordpress server on your own.
If you want to operate a wordpress site, use a hosting service like Automattic Wordpress or one of the many other hosting providers.
I would suggest you open the CSV with an editor like atom (atom.io) or bbedit (barebones.com) which both will let you have full control of the actual text.
Atom is probably the best to use overall and there you find the UTF mentioned at the bottom. It would probably say csv file next to it as it is formatting aware.
Finding and dealing with non-ASCII characters has been an ongoing problem since at least the '80s, and I think long before, but I don't know that for sure. A common solution is to install dos2unix and run it against the file, which takes off the CR/LF and UTF characters.
Alternative solution.
Recreate your database with UTF-8 as the standard encoding and it will stop being a problem. A word of warning for this one: make sure all databases and the default for the server are the same. Doing joins between UTF-8 and latin1 or ASCII is slow, and I have had things not behave as expected.
> Default on linux is not socket, it's IP, like everywhere else
Who talked about defaults? I said that a socket is an alternative connection method to host/port - one that is not available on Windows.
And yes, Django: askbot, which OP asked about, is written in Django and uses its migration framework for the database. You can see it in the source code: https://github.com/ASKBOT/askbot-devel/blob/master/askbot/migrations/0001_initial.py
If the forum is up, and op owns it, he will be able to use configured server parameters from django's database configuration.
Learn hardware, OS and operational work (even if you don't do it yourself). It will make you a better developer and at the same time, it will make you understand why your DBA says "you cannot do that" because your code runs in an actual machine, not in a "magical place". Basic query optimization and profiling will help too.
Why are you using utf8mb4 in some places and utf8 (alias for utf8mb3, deprecated) in others?
Two stocks which are in different markets might duplicate symbols - IAU is iShares Gold Trust on Nasdaq but Intrepid Mines in Australia. So `symbol` cannot be a unique key in `contract`.
The same stock might trade in two different markets using different symbols - Carnival is CCL in New York but 0EV1 in London. So `symbol` does not belong in `contract` at all; it should be in `exchange_contract` with a composite unique key on `(exchange_id, symbol)`.
Sometimes things which seem static change anyway. Countries change names (Eswatini was Swaziland until 2018), split apart (I still have a globe showing a unified Czechoslovakia), adopt new currencies (see: Euros). You may have to deal with such changes and decide how to manage historical data.
> price and volume data per day
If it's per day, then you want a date, not a datetime.
> price_volume ... decimal(10,2)
Not precise enough - U.S. markets can trade sub-penny stocks in hundredths of a cent; I don't know about other countries. You need at least decimal(10, 4), and when inflation takes some economies into printing $100-trillion banknotes, it might need to be wider than that.
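Pulling those points together, a minimal sketch of what the tables might look like; every name and type here is an assumption, not a definitive design:

```sql
CREATE TABLE exchange_contract (
    exchange_id INT NOT NULL,
    contract_id INT NOT NULL,
    symbol      VARCHAR(16) NOT NULL,
    PRIMARY KEY (exchange_id, contract_id),
    UNIQUE KEY uq_exchange_symbol (exchange_id, symbol)
);

CREATE TABLE daily_price (
    exchange_id INT NOT NULL,
    contract_id INT NOT NULL,
    trade_date  DATE NOT NULL,            -- per-day data: DATE, not DATETIME
    close_price DECIMAL(18,4) NOT NULL,   -- room for sub-penny prices
    volume      BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (exchange_id, contract_id, trade_date)
);
```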
When using mysqldump, users are NOT exported.
Here's an article which addresses how to export users: http://serverfault.com/questions/8860/how-can-i-export-the-privileges-from-mysql-and-then-import-to-a-new-server
On a side note, by default, not even functions and procedures are exported by mysqldump. You have to add --routines
You have two options. You can either dump the result of a query executed through a command line utility directly to a file, or you can save a CSV file on the server and transfer through whatever means you have available on your network. Everything you have looked at thus far has been describing how to get that job done one way or another.
Which option you go with will depend on a few things:
1. Does your user account for MySQL have the FILE permission? If not, you will have to dump the query output locally.
2. Do you have the ability to share files with the server (SSH, file sharing, etc)? If not, you will have to dump the query output locally.
3. Does your data set contain special characters, tabs in particular? If so, you will likely want to dump the data to a CSV server side via SELECT INTO OUTFILE.
Anyways, the batch option simply requires adding -B as an argument to the call to the mysql command line client. This tweaks its output to be, effectively, a tab-delimited csv. An example of a quick-and-dirty command line query and dump:
    mysql -B -u [username] -p[Password] -h [host] -e "[your query]" [schema] > C:\[target]\dump.csv
The > character simply directs the output into whatever file name you provide. It will overwrite the file already there.
Note that NULLS will actually show up as the word NULL, which could be irritating (and cause issues if you are dropping the data into excel). You can stop this at the query level using any of the techniques mentioned here
This would be the option I would start with. If you needed the output as a comma-delimited file or with fields quoted, then I would use the INTO OUTFILE solution.
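For completeness, a rough sketch of the server-side variant; the file is written on the MySQL server, the account needs the FILE privilege, and the path and columns here are hypothetical:

```sql
SELECT id, name, created_at
FROM   customers
INTO OUTFILE '/var/lib/mysql-files/customers.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES  TERMINATED BY '\n';
```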
> $query = "SELECT * FROM accounts WHERE username='{$username}' AND password='{$password}'";
This is wide open to SQL injection attacks. Help like this is why websites are still vulnerable to these. Use parameterized queries. They've been available for 20 years and everyone should be using them by default.
>$password = md5($_POST['username']);
Unsalted MD5 hashes (and MD5 hashes in general) are insecure; use `password_hash()` instead.
I found Flask (Python framework) extremely easy to use and with great performance in the past: http://flask.pocoo.org
I recently implemented a proof of concept of a key-value system with a REST interface answering JSON from a MySQL database in a few lines of code:
https://github.com/jynus/pysessions/blob/master/app.py
Warning! Don't use that code directly; it is quick-and-dirty code I wrote in a few minutes to learn the framework, just for testing purposes and not intended for production usage.
PHP website has excellent documentation, here is a direct link to PDO section:
http://php.net/manual/en/book.pdo.php
If you are using a PHP version above 5.1, you shouldn't have to install anything extra. And you can keep using the same database!
That is actually an incredibly difficult task :) In PHP, the more suitable feature is this one: http://php.net/manual/en/function.get-browser.php
The browscap project has a csv
file you can import into MySQL to be able to match features/versions:
$lastID;
This on a line by itself does nothing; remove it (line 48 on the pastebin).
$imageInsertedIdResult->fetch_object() - if this doesn't return a row, then $lastID has no value and is NULL; this is what triggers your error. You should test is_numeric($lastID) or !is_null($lastID).
Other than that, I strongly advise that you learn about preparing queries:
http://php.net/manual/de/mysqli.prepare.php
Also, did you know that you can get the id of the row you inserted last without actually querying it from a table?
If you used the object syntax of mysqli, it would simply be stored in $mysqli->insert_id. You currently use the procedural style - there is a function for that too (mysqli_insert_id()) - but you should switch to the object syntax.
DB_USER is a php constant.
On line 46 you're connecting to the database:
$mysqli = new mysqli(DB_HOST, DB_USER, DB_PASS, DB_NAME);
All four of those (host, username, password, database name) are constants, presumably predefined using PHP's define().
On line 17 it looks like it is looking for those constants with:
require_once("db_const.php");
If that file is not present your script should fail at this point. Given that it doesn't and actually has the connection denied, the file likely exists.
In that file you will find:
define("DB_USER", "yourUsername");
Change the credentials and database info to match that you set up with wamp.
Since every install of MySQL is different, general tuning suggestions aren’t really available. They will vary depending on a lot of factors including hardware resources, schema, how the data is used, storage engines involved, and the like.
Something that I would recommend is picking up High Performance MySQL from O'Reilly. The current version is a bit dated now (released in 2012), but it is still very useful. If you can wait a couple of weeks, the new version comes out next month.
You can also try out MySQL Tuner to identify some opportunities for configuration improvements. Just make sure that you thoroughly understand the settings and the effects of any changes that you might consider making since you are in a production environment.
Another tangential solution might be to use something like Sphinx. It's extremely powerful for these types of queries and can use MySQL as a real-time data source. http://sphinxsearch.com/about/sphinx/
Check out Sphinx Search. It installs on your server and is used for fast searches. It can talk directly to a MySQL database (it does so by default) and offers in depth indexing, filtering, and other search tools. The documentation on the site is a little hard to follow, at least for me, but I was able to find a very useful book (pdf) that got me started.
Have you considered setting them up with an ERP system? Here's one that's open source, and based on PHP: http://inoideas.org/
Also, you may want to browse templates in the MySQL documentation, and even MS Access to see if someone else has already created what you're looking to do. Then, once you get the basic database framework set up (not easy to do), then you can learn how to create custom queries, forms, and/or reports to get out the info you need (for thank you cards, etc)
I've done this before, but instead of emailing, I set up a Heroku add-on called Papertrail. It's free for what I use it for, and every time an ERROR-level log is raised, it forwards it to a Slack channel I monitor (you can set it up to trigger on any log level).
If you are going to send emails and you decide to use MailChimp or Sendgrid, try not to mark the emails as junk or spam (when batch cleaning) or else the provider could flag them and may limit the number of emails sent out next time (I.e. account reputation).
Generally time-series databases, like InfluxDB, are very good at handling the kind of data you're storing.
For example, we use a specialized TSDB for the Prometheus monitoring system. It uses specialized compression that's oriented for the kind of data you're producing.
For example, Prometheus TSDB uses about 1.3 bytes per sample (float64 + timestamp in milliseconds). The Prometheus TSDB is not general use and probably not a good fit for what you're doing. But InfluxDB is also pretty good at compressing time-series data. Last I looked it used about 2 bytes per sample.
They're also both designed for reading back the kind of data you're storing much more quickly than a normal SQL database.
At 2 bytes per sample, and a pair of floats (speed/direction) 1 second resolution data would be around 340KiB of data per day. 10 years of data would only be 1.2GiB, without having to downsample at all.
gh-ost is a great tool.
Percona is a MySQL consulting company that maintains a fork of Oracle MySQL, with a bunch of good tuning for real production work. I ran a similar-sized dataset - tables with billions of rows, over 1 TB in size. Percona is what we used for all our databases, and we used their support/consulting for evaluation/tuning. They're a solid bunch of engineers; I recommend them highly.
I can also recommend Percona PMM for monitoring MySQL services. It's based on Prometheus.
Pip is a tool used to install python packages. You should really ask over at /r/python as this is not a MySQL issue but a python configuration one.
You really should look into setting up virtual environments to avoid messing with the system-installed Python. For a quick start to get you going, I would install Homebrew, then install whatever version of Python you need.
Then depending on python packages you need you can run pip3 install <package>
The error is pretty self-explanatory - that file is missing. But the name of the file is very suspect. I imagine it should be something like ~/sandboxes/msb_8.0.21/my.sandbox.cnf
Note that's just an example. You'll have to figure out what version string it is looking for. I install MySQL server via Homebrew and it sets all of that stuff up for you, so you just need to point your client to root@localhost for a connection
Did you install mysql via dmg file from mysql's website or homebrew/macports?
Clearly your installation did NOT finish properly, so your Mac has no MySQL binaries such as the MySQL server (mysqld & mysqld_safe) and MySQL client (mysql). That's why bash gives you "no such file" errors.
I suggest you install MySQL via Homebrew (https://brew.sh).
The brew command to install MySQL is brew install mysql.
I've been told by other people about using mysqld_multi for managing their instances, but I have never used it personally.
If you want several instances just for testing (not on production) I am very happy with MySQL Sandbox.
If you want it for production, virtual machines is the way to go (or Docker, if you do not want the VM overhead, and something that is getting a lot of traction), so that you can control the resources used by each instance.
In any serious environment you will want a dedicated server, even if it is a VPS or worse hardware, as MySQL is very memory and disk hungry (as any database) and you do not want your application or other instances affecting the performance of your DB.
It's not an iOS app, but our web app runs well on mobile. I use it pretty frequently from my phone and have never had issues. One really nice thing is all your query history and connections are saved and synced across devices. e.g. you can enter your connection properties and write a query on your desktop and later rerun the exact same query on your phone. We don't do push notifications, but we have a really easy way to set up Slack alerts / send data to Google Sheets.
Check it out here:
I've heard good things about DataDog but haven't personally used it yet.
Their default MySQL stuff is documented here, and I believe you can add custom metrics for almost anything: https://docs.datadoghq.com/integrations/mysql/
I would suggest taking a look at Grafana. There are a ton of Dashboards other people have created and shared. It shouldn't be too difficult to take something like this dashboard and customize it to your needs.
First, multi-master is great! Until it stops working. Per another comment, I found Galera to be not very good on a MySQL server that is busy writing/updating on a server that actually uses the disk/hardware I/O.
You are not far off from a reasonably good architecture with your current setup. You can script promoting a slave any number of ways. The most reliable is probably using keepalived. Lots of examples out there. Know that keepalived isn't really set up to promote/demote nodes in an interactive way. It's not bad. It's Just not a pacemaker cluster.
Don't use a NAS with a 1 Gb NIC as the backend. It won't scale, and then getting off of it will be difficult.
If you aren't already married to MySQL, PostgreSQL will probably do the job better, especially if your IoT devices send JSON. Otherwise, know that your log server should be its own SQL server and optimized for writes; a ton of memory is not required. Then replicate the data to a second server that has indexes and RAM for fast SELECT queries.
Lots and lots of examples of running active/passive MySQL Linux clusters out there. Know that you will need at least three servers. The two node examples using pacemaker won't failover in a predictable way. Here's a very thorough DRBD example: https://www.linbit.com/downloads/tech-guides/DRBD8_MySQL_HA_on_the_Pacemaker_cluster_Stack.pdf
There are also examples out there using NBD as the way to get failover. This is not a bad solution, but you need at least 3 hosts with 10Gb NICs that support SAN packets. (Jumbo frames?) You still need a heartbeat cluster and three servers.
Note that I've done all of the setups described on absolutely vanilla desktop hardware with a few Intel NICs and dedicated drives for the database, and it worked great.
Sorry guys! This is the first time I've asked for help with this kind of problem. Of course, without the basic information it is hard to troubleshoot anything.
Windows 10 family: 10.0.19042 Number 19042
Type x64
Here is the link to the logs: https://filetransfer.io/data-package/zgjBA4rd#link
Basically I am a newbie in the SQL world; I just want to learn, start the server on my computer, and work on it.
If you are confused about the different types of joins (left, right, inner, outer), you won't really get this by reading alone. You must set up some test data and try it out - something like the sketch below. I think there are some websites where you can do this without much effort; sqlzoo could be worth trying.
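A minimal sketch of the kind of throwaway test data that makes the differences obvious (everything here is made up):

```sql
CREATE TABLE authors (id INT PRIMARY KEY, name VARCHAR(50));
CREATE TABLE books   (id INT PRIMARY KEY, author_id INT, title VARCHAR(50));

INSERT INTO authors VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO books   VALUES (10, 1, 'First Book');

-- INNER JOIN: only Alice shows up, because only she has a book.
SELECT a.name, b.title FROM authors a INNER JOIN books b ON b.author_id = a.id;

-- LEFT JOIN: Bob shows up too, with NULL for the title.
SELECT a.name, b.title FROM authors a LEFT JOIN books b ON b.author_id = a.id;
```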
One way I've dealt with that is to allow free form entry. However, use jQuery to 'suggest' from a list of known good options.
ie: pre-populate a list of countries. You could also get fancy and allow user entered data to get onto the list either after an admin verifies, or if more than three people are using that value or so.
"FROM tablea, tableb" will take each record from tablea and match it against each record in tableb, giving you the cross-product of the two tables. This is almost never what you want.
You want to research JOINs. I like this reference.
As a most basic case...
SELECT * FROM tablea as a LEFT JOIN tableb as b ON b.myvalue BETWEEN a.lowvalue AND a.hivalue
The ON clause is where you will have to work your magic. If a JOIN doesn't get you there, you will have to research SubQuery and all the fun they can be.
This comment is right: if the "algorithm" for the merge is trivial, you can just INSERT INTO another_table SELECT ... GROUP BY id, with the appropriate aggregate function.
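A minimal sketch of that pattern, with made-up table and column names:

```sql
INSERT INTO customers_merged (id, phone, email)
SELECT id,
       MAX(phone) AS phone,   -- MAX() skips NULLs, so the filled-in value wins
       MAX(email) AS email
FROM   customers_raw
GROUP  BY id;
```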
However, if you want to do complex cleanups without programming you will need a higher level tool such as http://openrefine.org/
If you’re okay with not building something from scratch, you can use a Business Intelligence tool. One example of an open source tool is Metabase: https://metabase.com/
Another is Redash: https://redash.io/help/open-source/setup
Metabase makes it a bit easier to build the reports you need, but Redash is also very solid.
Check these two out:
https://www.metabase.com/ and https://redash.io/
Both are free; Redash looks paid, but look for the "host it yourself" version on the page. Metabase is probably what you'll want - it's a bit more friendly - but Redash is a bit more customizable, though it requires more work up front.
You're going to need some scripting language that can interact with the database. Use PHP. It's got a small learning curve, easy to use, enough people use it that you can easily Google any question you may have about it and find the answer. There are other languages, but php is going to be the best for a beginner to get some stuff from a database to a browser.
This is a good place to start. When you understand what's going on, check out a tutorial; this one seems pretty good.
I'm going to preface this by saying I'm not much of a MySQL fan - it's kind of a weird little database and my preference is PostgreSQL. MySQL does have huge adoption, largely among web developers, and it is a perfectly serviceable DB.
However, given your current level of expertise (spreadsheets), it's a decent choice, and I would suggest you get your hands on SequelPro on the Mac. There is NO BETTER DATABASE GUI in the world. Not for MySQL or any other database. Bonus: it's free and has excellent CSV import/export capabilities, which will make moving your spreadsheet data in and out easy.
This was the best hint that I could find as to their pricing.
https://aws.amazon.com/marketplace/pp/B00909GAR4/ref=mkt_ste_l2_db_f3?nc2=h_l3_db
If you're only on a single node then you might want to instead look into pressing memcached into service by serializing objects. Even on basic single table lookups via a PK, memcache outruns MySQL querying the result.
I'm also assuming you've tried and gave up on the native MySQL qcache feature.
That's phpMyAdmin. Yes, it is free and comes with most prepackaged servers (WAMP, etc.) or with cPanel hosts. You can install it yourself too.
http://www.phpmyadmin.net/home_page/index.php
A single file application of the same type is called adminer and is much easier to install.
You're probably putting the cart before the horse.
Check out Limesurvey. It's open source and will ask all the various questions you want answered and store the data.
BTW, MariaDB 5.5's EoL date is 11 Apr 2020 (https://mariadb.org/mariadb-55-rip/), so it's not suitable for production.
Please check:
    SHOW VARIABLES LIKE '%char%';
    SHOW CREATE DATABASE dbname;
    SHOW CREATE TABLE tablename\G
Encoding should be correct in multiple places:
I’ve used MAMP in the past, it’s pretty simple to install and run.
You should also think about installing a GUI such as MySQL Workbench so you can easily interact with the database, unless you’re a fan of using command line :)
Another approach (and the one I use these days instead of MAMP) is to use a virtual environment such as Vagrant, and install MySQL inside that - keeps it sandboxed from your host OSX system.
Edit: fixed the link
Check out phpMyAdmin -- https://www.phpmyadmin.net/
Of course the demo for MySQL seems to be broken at the moment. The GUI for MariaDB seems to be working. You can get an idea of the interface without going through the installation.
To work with a database in a manner that's significantly easier than the command line, you need a "front end tool" such as MySQL Workbench or HeidiSQL.
You install it according to whatever instructions came with it. I don't have a mac handy at the moment, so I'm not going to be able to help much with that.
Generally, databases run as services. That is to say that they will start when the OS boots, and they will continue to run, silently in the background, until explicitly turned off or until the computer is powered down.
Code interacts with databases via drivers/TCP ports/sockets. Humans interact with databases via clients - at its simplest, that's the MySQL command line client (called 'mysql'). The sort of UI you're probably thinking of is more like MySQL Workbench: https://www.mysql.com/products/workbench/.
Either way, you're going to need at least one client, and probably two - the CLI and workbench are good places to start. You'll need to know where the DB is listening for client connections. For MySQL, that is usually on port 3306 or via a socket, depending on what you told the DB to do when you installed it.
>My biggest challenge is I need to add it to the query I am already using to retrieve marks and messages; but I am not sure how to do that because this query is already so huge.
Use stored procedures/functions. That way your queries can be as long and complex as you need, and you can call them from Node with just a simple CALL procedureName(param).
MySQL Workbench is pretty good for creating/testing stored procedures quickly.
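A hedged sketch of what that could look like; the procedure, table, and column names are made up:

```sql
DELIMITER //
CREATE PROCEDURE getMarksAndMessages(IN p_student_id INT)
BEGIN
    SELECT m.mark, msg.body
    FROM   marks m
    LEFT JOIN messages msg ON msg.student_id = m.student_id
    WHERE  m.student_id = p_student_id;
END //
DELIMITER ;

-- From Node (or any client) it is then just:
-- CALL getMarksAndMessages(42);
```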