板凳-------MySQL Cookbook Study Notes (9)


Chapter 4: Table Management
4.0 Introduction
MySQL :: Employees Sample Database :: 3 Installation
https://dev.mysql.com/doc/employee/en/employees-installation.html

The Employees database is compatible with several different storage engines, with the InnoDB engine enabled by default. Edit the employees.sql file and adjust the comments to choose a different storage engine:
SET default_storage_engine = InnoDB;
-- set storage_engine = MyISAM;
-- set storage_engine = Falcon;
-- set storage_engine = PBXT;
-- set storage_engine = Maria;

• Older releases accepted the storage_engine variable (removed in MySQL 5.7.5).
• MySQL 5.7.5 and later, including 8.0, use default_storage_engine.
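After the load finishes, you can confirm which engine was actually used (a quick check against the employees schema that employees.sql creates):

SELECT @@default_storage_engine;

SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'employees';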

D:\sql\Mysql_learning\employee\mysql\master>mysql -u root -p -t < employees.sql
Enter password: ****
+-----------------------------+
| INFO                        |
+-----------------------------+
| CREATING DATABASE STRUCTURE |
+-----------------------------+
+------------------------+
| INFO                   |
+------------------------+
| storage engine: InnoDB |
+------------------------+
+---------------------+
| INFO                |
+---------------------+
| LOADING departments |
+---------------------+
+-------------------+
| INFO              |
+-------------------+
| LOADING employees |
+-------------------+
+------------------+
| INFO             |
+------------------+
| LOADING dept_emp |
+------------------+
+----------------------+
| INFO                 |
+----------------------+
| LOADING dept_manager |
+----------------------+
+----------------+
| INFO           |
+----------------+
| LOADING titles |
+----------------+
+------------------+
| INFO             |
+------------------+
| LOADING salaries |
+------------------+
+---------------------+
| data_load_time_diff |
+---------------------+
| 00:01:09            |
+---------------------+
mysql>  SOURCE D:/sql/Mysql_learning/employee/mysql/master/test_employees_md5.sql;
Database changed
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
1 row in set (0.00 sec)

Query OK, 0 rows affected, 2 warnings (0.00 sec)

Query OK, 0 rows affected (0.06 sec)

Query OK, 0 rows affected (0.04 sec)

Query OK, 6 rows affected (0.00 sec)
Records: 6  Duplicates: 0  Warnings: 0

+--------------+------------------+----------------------------------+
| table_name   | expected_records | expected_crc                     |
+--------------+------------------+----------------------------------+
| departments  |                9 | d1af5e170d2d1591d776d5638d71fc5f |
| dept_emp     |           331603 | ccf6fe516f990bdaa49713fc478701b7 |
| dept_manager |               24 | 8720e2f0853ac9096b689c14664f847e |
| employees    |           300024 | 4ec56ab5ba37218d187cf6ab09ce1aa1 |
| salaries     |          2844047 | fd220654e95aea1b169624ffe3fca934 |
| titles       |           443308 | bfa016c472df68e70a03facafa1bc0a8 |
+--------------+------------------+----------------------------------+
6 rows in set (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.01 sec)

Query OK, 0 rows affected (0.04 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 300024 rows affected, 1 warning (6.19 sec)
Records: 300024  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (0.29 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 9 rows affected, 1 warning (0.01 sec)
Records: 9  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 24 rows affected, 1 warning (0.01 sec)
Records: 24  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 331603 rows affected, 1 warning (6.28 sec)
Records: 331603  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (0.38 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 443308 rows affected, 1 warning (9.02 sec)
Records: 443308  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (0.43 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 2844047 rows affected, 1 warning (51.55 sec)
Records: 2844047  Duplicates: 0  Warnings: 1

Query OK, 1 row affected (2.60 sec)

Query OK, 0 rows affected (0.07 sec)

+--------------+------------------+----------------------------------+
| table_name   | found_records    | found_crc                        |
+--------------+------------------+----------------------------------+
| departments  |                9 | d1af5e170d2d1591d776d5638d71fc5f |
| dept_emp     |           331603 | ccf6fe516f990bdaa49713fc478701b7 |
| dept_manager |               24 | 8720e2f0853ac9096b689c14664f847e |
| employees    |           300024 | 4ec56ab5ba37218d187cf6ab09ce1aa1 |
| salaries     |          2844047 | fd220654e95aea1b169624ffe3fca934 |
| titles       |           443308 | bfa016c472df68e70a03facafa1bc0a8 |
+--------------+------------------+----------------------------------+
6 rows in set (0.00 sec)

+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | OK            | ok        |
| dept_emp     | OK            | ok        |
| dept_manager | OK            | ok        |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
6 rows in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

+------------------+
| computation_time |
+------------------+
| 00:01:18         |
+------------------+
1 row in set (0.01 sec)

Query OK, 0 rows affected (0.04 sec)

+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
2 rows in set (0.00 sec)

mysql>
SQL CREATE TABLE LIKE: usage and examples
In MySQL, CREATE TABLE LIKE creates a new table whose structure is identical to an existing table's, but without copying any of that table's data. Detailed usage and examples follow later in this section. First, the Python script below merges the SQL files referenced by the small employees dataset and loads them into the cookbook database:

import mysql.connector
from pathlib import Path
import re

def merge_sql_files(main_sql_path):
    """递归合并所有被引用的SQL文件"""
    main_path = Path(main_sql_path)
    base_dir = main_path.parent
    merged_sql = []
    
    with open(main_path, 'r', encoding='utf-8') as f:
        for line in f:
            # Handle "source" commands
            if line.strip().startswith('source'):
                # Extract the referenced file name
                match = re.search(r'source\s+[\'"](.+?)[\'"]|source\s+(\S+)', line.strip())
                ref_file = match.group(1) or match.group(2)
                ref_path = base_dir / ref_file
                
                if ref_path.exists():
                    merged_sql.append(f"\n-- Begin import: {ref_file}\n")
                    merged_sql.append(merge_sql_files(str(ref_path)))
                    merged_sql.append(f"\n-- End import: {ref_file}\n")
                else:
                    raise FileNotFoundError(f"Referenced SQL file not found: {ref_path}")
            else:
                # Strip client-side commands such as DELIMITER, USE, CONNECT
                if not line.strip().upper().startswith(('DELIMITER', 'USE ', 'CONNECT ')):
                    merged_sql.append(line)
    
    return ''.join(merged_sql)

def execute_mega_sql(db_config, full_sql):
    """执行大型SQL脚本"""
    cnx = mysql.connector.connect(**db_config)
    cursor = cnx.cursor()
    
    try:
        # Raise server buffers so large INSERT statements fit in one packet
        cursor.execute("SET GLOBAL max_allowed_packet=256*1024*1024")
        cursor.execute("SET GLOBAL net_buffer_length=1000000")
        
        # Split the script into individual statements and execute them one by one
        statements = [s.strip() for s in full_sql.split(';') if s.strip()]
        for stmt in statements:
            try:
                # Make sure the statement ends with a semicolon
                sql = stmt if stmt.endswith(';') else stmt + ';'
                for result in cursor.execute(sql, multi=True):
                    if result.with_rows:
                        result.fetchall()  # consume any result set
                    print(f"Executed: {result.statement[:100]}...")
                cnx.commit()
            except mysql.connector.Error as err:
                print(f"Skipping error: {err}\nStatement: {stmt[:200]}...")
                cnx.rollback()
            except Exception as e:
                print(f"Unexpected error: {e}")
                cnx.rollback()
    
    finally:
        cursor.close()
        cnx.close()

# Connection settings
db_config = {
    'host': 'localhost',
    'user': 'root',
    'password': 'root',
    'database': 'cookbook'
}

# Path to the main SQL file
main_sql_path = r"D:\sql\Mysql_learning\employee\mysql\dataset_small\employee.sql"

try:
    print("Merging SQL files...")
    full_sql = merge_sql_files(main_sql_path)
    
    # Optional: save the merged script for inspection
    with open('merged_script.sql', 'w', encoding='utf-8') as f:
        f.write(full_sql)
    
    print("Executing the SQL script...")
    execute_mega_sql(db_config, full_sql)
    print("Data import completed successfully!")

except Exception as e:
    print(f"Import failed: {str(e)}")

def verify_import():
    tables = [
        'employee', 'department', 'dept_emp',
        'dept_manager', 'title', 'salary'
    ]
    
    cnx = mysql.connector.connect(**db_config)
    cursor = cnx.cursor(dictionary=True)
    
    for table in tables:
        cursor.execute(f"SELECT COUNT(*) as count FROM {table}")
        result = cursor.fetchone()
        print(f"{table}表记录数: {result['count']}")
    
    cursor.close()
    cnx.close()

verify_import()
employee row count: 1000
department row count: 9
dept_emp row count: 1103
dept_manager row count: 16
title row count: 1470
salary row count: 9488
mysql> SHOW TABLES;
+-----------------------+
| Tables_in_cookbook    |
+-----------------------+
| actors                |
| adcount               |
| al_winner             |
| app_log               |
| artist                |
| book_authors          |
| book_vendor           |
| booksales             |
| catalog_list          |
| cd                    |
| city                  |
| color                 |
| cow_color             |
| cow_order             |
| current_dept_emp      |
| customers             |
| date_val              |
| datetbl               |
| datetime_val          |
| department            |
| dept_emp              |
| dept_emp_latest_date  |
| dept_manager          |
| dialogue              |
| dialogue_analysis     |
| die                   |
| doremi                |
| drawing               |
| driver_log            |
| employee              |
| expt                  |
| formula1              |
| goods_characteristics |
| goods_shops           |
| groceries             |
| groceries_order_items |
| hitcount              |
| hitlog                |
| hostip                |
| hostname              |
| housewares            |
| housewares2           |
| housewares3           |
| housewares4           |
| httpdlog              |
| httpdlog2             |
| hw_category           |
| image                 |
| ingredient            |
| insect                |
| inv_item              |
| invoice               |
| item                  |
| limbs                 |
| limbs_backup          |
| limbs_stats           |
| mail                  |
| mail2                 |
| mail_view             |
| marathon              |
| mark_log              |
| metal                 |
| money                 |
| movies                |
| movies_actors         |
| movies_actors_link    |
| mytable               |
| name                  |
| news                  |
| newsstaff             |
| numbers               |
| obs                   |
| occasion              |
| painting              |
| passtbl               |
| passwd                |
| patients              |
| perl_session          |
| person                |
| php_session           |
| phrase                |
| player_stats          |
| player_stats2         |
| poi                   |
| poll_vote             |
| profile               |
| profile_contact       |
| rainfall              |
| rand_names            |
| rank                  |
| ranks                 |
| reviews               |
| roster                |
| ruby_session          |
| salary                |
| sales_region          |
| sales_tax_rate        |
| sales_volume          |
| sibling               |
| some table            |
| standings1            |
| standings2            |
| states                |
| str_val               |
| sundays               |
| taxpayer              |
| testscore             |
| testscore_withmisses  |
| testscore_withmisses2 |
| time_val              |
| title                 |
| tmp                   |
| tomcat_role           |
| tomcat_session        |
| tomcat_user           |
| top_names             |
| trip_leg              |
| trip_log              |
| tsdemo                |
| weatherdata           |
| weekday               |
+-----------------------+
121 rows in set (0.01 sec)
mysql> SELECT * FROM department LIMIT 5;
+---------+------------------+
| dept_no | dept_name        |
+---------+------------------+
| d009    | Customer Service |
| d005    | Development      |
| d002    | Finance          |
| d003    | Human Resources  |
| d001    | Marketing        |
+---------+------------------+
5 rows in set (0.00 sec)

mysql> SELECT COUNT(*) FROM employee;
+----------+
| COUNT(*) |
+----------+
|     1000 |
+----------+
1 row in set (0.01 sec)

Basic syntax

CREATE TABLE new_table_name LIKE existing_table_name;
new_table_name: the name of the new table to create.
existing_table_name: the name of the existing table that serves as the template.
This syntax copies all column definitions, indexes, and constraints from existing_table_name to new_table_name.

Example
The following example shows how to create a new table with CREATE TABLE LIKE:

-- Suppose there is a table named employees
CREATE TABLE employees_backup LIKE employees;

The statement above creates a new table named employees_backup whose structure is identical to that of employees, but it contains no data.

If the data should be copied as well, combine it with INSERT INTO SELECT:

-- Copy the data into the new table
INSERT INTO employees_backup SELECT * FROM employees;

Notes
CREATE TABLE LIKE does not copy other database objects such as triggers or views.
Support for LIKE varies across database systems. PostgreSQL, for instance, offers a similar CREATE TABLE AS feature, but its behavior differs slightly.
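To see exactly what was and was not copied, compare the two definitions in the mysql client (a quick check using the employees_backup example above):

SHOW CREATE TABLE employees\G
SHOW CREATE TABLE employees_backup\G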
Complete example with INSERT INTO SELECT
The following example shows how to combine CREATE TABLE LIKE with INSERT INTO SELECT to create and populate a new table:

-- Create an empty table with the same structure
CREATE TABLE employees_backup LIKE employees;

-- Insert the data from the original table into the new one
INSERT INTO employees_backup SELECT * FROM employees WHERE department = 'Sales';
The code above creates a new table named employees_backup and copies only the rows belonging to the Sales department.

4.1 Cloning a Table
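
The INSERT below assumes mail2 already exists as an empty clone of mail; the cloning step itself (not shown in the captured session) would be:

CREATE TABLE mail2 LIKE mail;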

mysql> insert into mail2 select * from mail where srcuser = 'barb';
Query OK, 3 rows affected (0.01 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from mail2;
+---------------------+---------+---------+---------+---------+-------+
| t                   | srcuser | srchost | dstuser | dsthost | size  |
+---------------------+---------+---------+---------+---------+-------+
| 2006-05-11 10:15:08 | barb    | saturn  | tricia  | mars    | 58274 |
| 2006-05-13 13:59:18 | barb    | saturn  | tricia  | venus   |   271 |
| 2006-05-14 14:42:21 | barb    | venus   | barb    | venus   | 98151 |
+---------------------+---------+---------+---------+---------+-------+
3 rows in set (0.00 sec)
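
If the copy is only needed for the current session, the same pattern works with a TEMPORARY table, which disappears automatically when the session ends (a sketch; mail_tmp is a hypothetical name):

CREATE TEMPORARY TABLE mail_tmp LIKE mail;
INSERT INTO mail_tmp SELECT * FROM mail;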

4.2 Saving Query Results in a Table
1. CREATE TABLE ... LIKE + INSERT INTO ... SELECT
How it works
Structure copy: the LIKE keyword copies the source table's complete structure (columns, indexes, constraints) but none of its data.
Filtered insert: INSERT INTO ... SELECT then inserts the rows that match the filter into the new table.
Example
Scenario: copy the employees hired in 2023 from the employee table into a new table employees_2023.
sql
-- 1. Create a new table with the same structure
mysql> CREATE TABLE employees_2023 LIKE employee;
Query OK, 0 rows affected (0.04 sec)

-- 2. Insert the qualifying rows
INSERT INTO employees_2023
SELECT *
FROM employee
WHERE hire_date >= '2023-01-01' AND hire_date < '2024-01-01';
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0
Key points:
The new table employees_2023 has the same columns, indexes, and constraints as employee.
SELECT * copies every column, so the source and target structures must match exactly. (No employee in this sample data was hired in 2023, hence the Records: 0 above.)
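A quick sanity check on the copy (with this sample data it returns 0, for the reason just noted):
sql
SELECT COUNT(*) FROM employees_2023;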
2. CREATE TABLE ... AS SELECT
How it works
Structure and data in one step: the new table is created from the columns of the SELECT clause and the query result is inserted into it directly.
New table structure: it contains only the columns named in the SELECT and does not inherit the source table's indexes or constraints (unless declared manually).
Example
Scenario: create a high_salary_employees table holding the employee number, full name, and salary of above-average earners (in this dataset, salaries live in the separate salary table).
sql
CREATE TABLE high_salary_employees AS
SELECT
    e.emp_no,                                           -- explicitly listed columns
    CONCAT(e.first_name, ' ', e.last_name) AS full_name,
    s.salary
FROM employee e
JOIN salary s ON s.emp_no = e.emp_no
WHERE s.salary > (SELECT AVG(salary) FROM salary);      -- keep only above-average salaries
Key points:
The new table high_salary_employees contains only the emp_no, full_name, and salary columns.
It does not inherit the source tables' indexes or constraints; add them manually if needed (e.g. a PRIMARY KEY).
3. Comparing the two approaches

| Aspect | CREATE TABLE ... LIKE + INSERT | CREATE TABLE ... AS SELECT |
| --- | --- | --- |
| Structural fidelity | Copies the full source structure (indexes and constraints included) | Copies only the columns named in the SELECT |
| Filtering flexibility | Supports complex WHERE conditions and multi-table joins | Supports WHERE, but the column list must be planned up front |
| Performance (large data sets) | Two steps, may be slightly slower | One step, usually faster |
| Typical use | Keep the full source structure; data may be modified later | Quickly build temporary or summary tables |
4. Caveats
Column matching:
With INSERT INTO ... SELECT, the order and types of the selected columns must match the target table exactly.
Example (naming the target columns avoids ordering problems):
sql
INSERT INTO employees_2023 (emp_id, full_name, hire_date)
SELECT id, name, hired_at FROM employees WHERE ...;

Constraints and indexes:
CREATE TABLE ... AS does not copy the source table's constraints (such as NOT NULL) or its indexes; add them manually:
sql
CREATE TABLE high_salary_employees AS
SELECT emp_id, salary FROM employees WHERE ...;

ALTER TABLE high_salary_employees ADD PRIMARY KEY (emp_id); -- add the primary key manually
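MySQL can also declare the key directly in the CREATE TABLE ... SELECT statement, so no separate ALTER is needed (a sketch using the same placeholder columns):
sql
CREATE TABLE high_salary_employees (PRIMARY KEY (emp_id))
SELECT emp_id, salary FROM employees WHERE ...;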

Temporary tables:
For quick statistics, combine this with the TEMPORARY keyword:
sql
CREATE TEMPORARY TABLE temp_stats AS
SELECT dept_id, COUNT(*) AS emp_count FROM employees GROUP BY dept_id;
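A TEMPORARY table is visible only to the session that created it and is dropped automatically when that session ends; within the session it can be queried like any other table (a sketch; emp_count is the alias added above):
sql
SELECT * FROM temp_stats ORDER BY emp_count DESC;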

5. Practical case
Requirement: copy the January 2023 orders from an orders table into a new table, adding an extra computed column.
sql
-- Method 1: LIKE + INSERT
CREATE TABLE jan2023_orders LIKE orders;

INSERT INTO jan2023_orders
SELECT *
FROM orders
WHERE order_date BETWEEN '2023-01-01' AND '2023-01-31';

-- Method 2: AS SELECT (with a computed column)
CREATE TABLE jan2023_orders_stats AS
SELECT
    order_id,
    customer_id,
    amount,
    amount * 0.9 AS discounted_amount  -- computed column
FROM orders
WHERE order_date BETWEEN '2023-01-01' AND '2023-01-31';

With these techniques you can flexibly copy table structure and data to fit different scenarios.

https://www.modb.pro/db/408221
How the MySQL sample database Employee was built