spark/bin/spark-sql \
--verbose \
--database default \
--name sql_test_1 \
--conf spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/driver-sql-hadoop-jfs-1.log" \
--conf spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/executor-sql-hadoop-jfs-1.log" \
-e \
"
DROP TABLE IF EXISTS spark_dept;
CREATE TABLE spark_dept(deptno int, dname string, loc string);
INSERT INTO spark_dept VALUES (10, 'ACCOUNTING', 'NEW YORK');
select * from spark_dept;
select count(*) from spark_dept where deptno=10
"
Maximum heap size rounded up to minimum supported size 512 MB, specified Xmx is 128 MB.
Using properties file: null
24/07/15 14:24:33 WARN Utils: Your hostname, xfhuang-pc resolves to a loopback address: 127.0.1.1; using 198.18.0.1 instead (on interface eth0)
24/07/15 14:24:33 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Parsed arguments:
master local[*]
remote null
deployMode null
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile null
driverMemory null
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions -Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/driver-sql-hadoop-jfs-1.log
supervise false
queue null
numExecutors null
files null
pyFiles null
archives null
mainClass org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
primaryResource spark-internal
name sql_test_1
childArgs [--database default -e
DROP TABLE IF EXISTS spark_dept;
CREATE TABLE spark_dept(deptno int, dname string, loc string);
INSERT INTO spark_dept VALUES (10, 'ACCOUNTING', 'NEW YORK');
select * from spark_dept;
select count(*) from spark_dept where deptno=10
]
jars null
packages null
packagesExclusions null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file null:
(spark.driver.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/driver-sql-hadoop-jfs-1.log)
(spark.executor.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/executor-sql-hadoop-jfs-1.log)
Main class:
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
Arguments:
--database
default
-e
DROP TABLE IF EXISTS spark_dept;
CREATE TABLE spark_dept(deptno int, dname string, loc string);
INSERT INTO spark_dept VALUES (10, 'ACCOUNTING', 'NEW YORK');
select * from spark_dept;
select count(*) from spark_dept where deptno=10
--verbose
Spark config:
(spark.app.name,sql_test_1)
(spark.app.submitTime,1721024673994)
(spark.driver.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/driver-sql-hadoop-jfs-1.log)
(spark.executor.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true -Dlog.file=/opt/spark/logs/executor-sql-hadoop-jfs-1.log)
(spark.jars,)
(spark.master,local[*])
(spark.submit.deployMode,client)
(spark.submit.pyFiles,)
Classpath elements:
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
24/07/15 14:24:34 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/07/15 14:24:34 WARN HiveConf: HiveConf of name hive.privilege.synchronizer does not exist
24/07/15 14:24:34 WARN HiveConf: HiveConf of name hive.metastore.event.db.notification.api.auth does not exist
Zing VM warning: alt signal stk requested for user signal handler (signal 1).
Zing VM warning: alt signal stk requested for user signal handler (signal 2).
Zing VM warning: alt signal stk requested for user signal handler (signal 7).
Zing VM warning: alt signal stk requested for user signal handler (signal 8).
Zing VM warning: alt signal stk requested for user signal handler (signal 11).
Zing VM warning: alt signal stk requested for user signal handler (signal 12).
Zing VM warning: alt signal stk requested for user signal handler (signal 13).
Zing VM warning: alt signal stk requested for user signal handler (signal 15).
Zing VM warning: alt signal stk requested for user signal handler (signal 23).
Zing VM warning: alt signal stk requested for user signal handler (signal 60).
Zing VM warning: alt signal stk requested for user signal handler (signal 61).
Zing VM warning: alt signal stk requested for user signal handler (signal 62).
2024/07/15 14:24:35.013278 juicefs[3128621] <WARNING>: The latency to database is too high: 12.534501ms [sql.go:240]
24/07/15 14:24:35 WARN JuiceFileSystemImpl: 2024/07/15 14:24:35.013278 juicefs[3128621] <WARNING>: The latency to database is too high: 12.534501ms [sql.go:240]
Spark Web UI available at http://198.18.0.1:4040
Spark master: local[*], Application Id: local-1721024676263
DROP TABLE IF EXISTS spark_dept
Time taken: 1.867 seconds
CREATE TABLE spark_dept(deptno int, dname string, loc string)
24/07/15 14:24:41 WARN ResolveSessionCatalog: A Hive serde table will be created as there is no table provider specified. You can set spark.sql.legacy.createHiveTableByDefault to false so that native data source table will be created instead.
24/07/15 14:24:41 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
Time taken: 0.859 seconds
INSERT INTO spark_dept VALUES (10, 'ACCOUNTING', 'NEW YORK')
Time taken: 4.986 seconds
select * from spark_dept
10 ACCOUNTING NEW YORK
Time taken: 0.663 seconds, Fetched 1 row(s)
select count(*) from spark_dept where deptno=10
24/07/15 14:24:48 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(GPGC Old Pauses), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
24/07/15 14:24:48 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(GPGC New Cycles, GPGC Old Pauses), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
24/07/15 14:24:48 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(GPGC New Pauses, GPGC New Cycles, GPGC Old Pauses), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
24/07/15 14:24:48 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(GPGC Old Cycles, GPGC New Pauses, GPGC New Cycles, GPGC Old Pauses), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
1
Time taken: 0.987 seconds, Fetched 1 row(s)
░▒▓ ~/environment 14:24 took 19s
❯ hadoop/bin/hadoop fs -ls /orders_hudi_2
Zing VM warning: alt signal stk requested for user signal handler (signal 1).
Zing VM warning: alt signal stk requested for user signal handler (signal 2).
Zing VM warning: alt signal stk requested for user signal handler (signal 7).
Zing VM warning: alt signal stk requested for user signal handler (signal 8).
Zing VM warning: alt signal stk requested for user signal handler (signal 11).
Zing VM warning: alt signal stk requested for user signal handler (signal 12).
Zing VM warning: alt signal stk requested for user signal handler (signal 13).
Zing VM warning: alt signal stk requested for user signal handler (signal 15).
Zing VM warning: alt signal stk requested for user signal handler (signal 23).
Zing VM warning: alt signal stk requested for user signal handler (signal 60).
Zing VM warning: alt signal stk requested for user signal handler (signal 61).
Zing VM warning: alt signal stk requested for user signal handler (signal 62).
2024-07-15 14:24:56,784 INFO fs.TrashPolicyDefault: The configured checkpoint interval is 0 minutes. Using an interval of 0 minutes that is used for deletion instead
2024-07-15 14:24:56,785 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Found 1 items
drwxr-xr-x - xfhuang supergroup 4096 2024-07-15 14:24 /orders_hudi_2/spark_dept
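For repeated smoke tests like the one above, it can help to assemble the `spark-sql` invocation programmatically instead of retyping it. A minimal Python sketch, using only the flags and paths that appear in this transcript (the `name` and `log_prefix` parameters are placeholders I introduce here; adjust the `spark/bin/spark-sql` path and `/opt/spark/logs` directory for your install):

```python
import shlex

def spark_sql_cmd(name: str, log_prefix: str, sql: str) -> list[str]:
    """Assemble a spark-sql command line like the one in the transcript above.

    `name` becomes the Spark application name; `log_prefix` is interpolated
    into the driver/executor -Dlog.file paths (both are illustrative
    parameters, not Spark options themselves).
    """
    java_opts = "-Dio.netty.tryReflectionSetAccessible=true"
    return [
        "spark/bin/spark-sql",
        "--verbose",
        "--database", "default",
        "--name", name,
        "--conf",
        f"spark.driver.extraJavaOptions={java_opts} "
        f"-Dlog.file=/opt/spark/logs/driver-{log_prefix}.log",
        "--conf",
        f"spark.executor.extraJavaOptions={java_opts} "
        f"-Dlog.file=/opt/spark/logs/executor-{log_prefix}.log",
        "-e", sql,
    ]

# Reproduce the invocation from the transcript (pass the argv list to
# subprocess.run to actually execute it against a live Spark install).
cmd = spark_sql_cmd("sql_test_1", "sql-hadoop-jfs-1",
                    "select count(*) from spark_dept where deptno=10")
print(shlex.join(cmd))
```

Building the argv as a list (rather than one shell string) sidesteps the quoting pitfalls around the embedded SQL and the space-separated Java options inside each `--conf` value.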