#!/bin/bash
# Copyright 2014 CloudHarmony Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ "$1" == "-h" ] || [ "$1" == "--help" ] ; then
cat << EOF
Usage: save.sh [args] [/path/to/results]
Saves SPEC CPU 2006 test results to CSV files, Google BigQuery, MySQL, PostgreSQL,
Librato Metrics or via HTTP callback. Test artifacts may also be saved to S3,
Google Cloud Storage or Azure (API) compatible object storage.
If the [/path/to/results] argument is not specified, 'pwd' will be assumed.
This argument may be either the directory where test results have been written,
or a directory containing numbered sub-directories [1..N] each containing
results from a test iteration. The test iteration number is included in saved
results (1 for non-numbered directories).
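For example (paths are illustrative), results from 5 test iterations might be
organized as ~/spec-testing/1 through ~/spec-testing/5, each sub-directory
containing the output of one iteration.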
By default, results are written to CSV files in 'pwd'. The arguments below may
be set to modify the default CSV saving behavior. These arguments may also be
set in a line-delimited config file located in ~/.ch_benchmark (e.g. db_host=localhost)
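For example, a ~/.ch_benchmark file might contain (values are illustrative only):
db=mysql
db_host=localhost
db_user=dbuser
db_pswd=dbpass
db_name=benchmarks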
--db Save results to a database instead of CSV files.
The following argument values are supported:
bigquery => save results to a Google BigQuery
dataset
callback => save results using an HTTP callback
librato => save results to Librato Metrics
(see https://metrics.librato.com)
mysql => save results to a MySQL db
postgresql => save results to a PostgreSQL db
For --db callback, HTTP requests will be made to
--db_host. A HEAD request is used for validation,
and a POST to submit results where CSV data is
contained in the POST body (first row is a header
containing column names). A simple example in PHP
to retrieve the CSV results as a string is:
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
$csv = file_get_contents('php://input');
}
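A rough client-side equivalent of the POST made by this script (URL and file
name are hypothetical) is:
curl -X POST --data-binary @results.csv http://callback.example.com/save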
--db_and_csv If the --db argument is set, results will be saved
to both CSV and --db specified
--db_callback_header If the --db argument is 'callback', this argument
may specify one or more request headers to include
in both the HEAD validation and POST CSV submission
requests
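For example (header names and values are illustrative only):
--db_callback_header "Authorization: Bearer TOKEN"
--db_callback_header "X-Environment: staging"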
--db_host If the --db argument is set, this argument
specifies the database server hostname. For
BigQuery this parameter may be optionally used to
designate a project (otherwise the default project
is assumed). For 'callback', this is the full URL
to post result to (if there is no http/https
prefix, http will be assumed). A HEAD request to
this URL is used for validation (should respond
with 2XX). Callbacks are in the form of an HTTP
POST where the POST body is CSV contents (1st row
is header containing column names). Callback should
respond with 2XX to be considered valid. The
following request parameters are added to the URL:
benchmark_id => meta-id value in benchmark.ini
benchmark_version => meta-version value in
benchmark.ini
db_name => the --db_name argument value
table => the table name (including
--db_prefix)
This parameter is not used for Librato Metrics
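For example (hostname is illustrative), the URL used for the POST might
resemble:
http://callback.example.com/save?benchmark_id=[meta-id]&benchmark_version=[meta-version]&db_name=[db_name]&table=[table]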
Librato Metrics Parameters: the following parameters are specific to
--db librato only. More information about these parameters is available in the
Librato API documentation: http://dev.librato.com/v1/metric-attributes
http://dev.librato.com/v1/post/metrics and
http://dev.librato.com/v1/put/metrics/:name
Each of these metrics may be repeated to submit to multiple gauges/counters. If
multiple are set, the ordering of each will be used to distinguish their
properties. The only parameters that MUST be repeated are db_librato_name and
db_librato_value (or db_librato_count + db_librato_sum in place of value). If
the others are not repeated, they will be applied to all submissions
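For example (column names are illustrative), two gauges could be submitted per
result row by repeating the name/value arguments in order:
--db_librato_name "{benchmark}-base" --db_librato_value base_metric
--db_librato_name "{benchmark}-peak" --db_librato_value peak_metric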
--db_librato_aggregate Enable service-side aggregation for the Librato
Metrics gauge. Only applicable if the gauge does not
already exist
--db_librato_color Sets a default color to prefer when visually
rendering the metric. Must be a seven character
string that represents the hex code of the color
e.g. #52D74C
--db_librato_count Optional name of the column that designates the
number of samples for each test performed (gauge
metrics only). Required if --db_librato_value is
not set. Cannot be used if --db_librato_value is
set
--db_librato_description Text that is used to explain what a gauge is measuring.
This parameter may also contain column name tokens
that will be replaced by actual test values. The
format for these is {column_name}
--db_librato_display_max If a metric has a known theoretical maximum value,
set this so the visualizations can provide
perspective of the current values relative to the
maximum value
--db_librato_display_min If a metric has a known theoretical minimum value,
set this so that visualizations can provide
perspective of the current values relative to the
minimum value
--db_librato_display_name Name which will be used for the metric when viewing
the Metrics website. This parameter may also
contain column name tokens that will be replaced by
actual test values. The format for these is
{column_name}
--db_librato_display_units_long A string that identifies the unit of
measurement e.g. Microseconds. Used in
visualizations e.g. the Y-axis label on a graph.
Alternatively, this can be the name of a column
--db_librato_display_units_short A terse (usually abbreviated) string that
identifies the unit of measurement e.g. uS
(Microseconds). Used in visualizations e.g. the
tooltip for a point on a graph. Alternatively, this
can be the name of a column
--db_librato_display_stacked A boolean value indicating whether or not multiple
sources for a metric should be aggregated in a
visualization (e.g. stacked graphs). By default
counters have display_stacked enabled while gauges
have it disabled
--db_librato_display_transform A linear formula that is run on each measurement
prior to visualization. Useful for translating
between different units (e.g. Fahrenheit -> Celsius)
or scales (e.g. Microseconds -> Milliseconds). The
formula may only contain: numeric characters,
whitespace, parentheses, the letter x, and approved
mathematical operators ('+', '-', '*', '/'). The
regular expression used is /^[\dxp()+-\/ ]+$/
--db_librato_max If --db_librato_count was set, this parameter should
designate the name of the column containing the
largest individual measurement. Cannot be used if
--db_librato_value is set
--db_librato_min If --db_librato_count was set, this parameter should
designate the name of the column containing the
smallest individual measurement. Cannot be used if
--db_librato_value is set
--db_librato_measure_time Optional name of the column containing a parsable
date string to associate with each test result
(otherwise the time submitted is assumed)
--db_librato_name The unique identifying name of the property being
tracked. The metric name is used both to create new
measurements and query existing measurements. Must
be 255 or fewer characters, and may only consist of
'A-Za-z0-9.:-_'. This parameter may also contain
column name tokens that will be replaced by actual
test values. The format for these is {column_name}.
For example, the parameter
"{meta_compute_service_id}-{meta_region}" might be
replaced with "aws:ec2-us-east-1". The default
name is the benchmark name + version
+ --db_prefix, --db_suffix (if specified). Tokens
in this string may also include {benchmark} and
{version}
--db_librato_period Number of seconds that is the standard reporting
period of the metric. Setting the period enables
Metrics to detect abnormal interruptions in
reporting and aids in analytics. For gauge metrics
that have service-side aggregation enabled, this
option will define the period that aggregation
occurs on
--db_librato_source Optional string which describes the originating
source of a measurement when that measurement is
tracked across multiple members of a population.
Examples: foo.bar.com, user-123, 77025.
Sources must be composed of 'A-Za-z0-9.:-_' and can
be up to 255 characters in length. The word all is
reserved and cannot be used as a user source.
This parameter may also contain column name tokens
that will be replaced by actual test values. The
format for these is {column_name}. For example, the
parameter "{meta_compute_service_id}-{meta_region}"
might be replaced with "aws:ec2-us-east-1"
--db_librato_sum If --db_librato_count was set, this MUST be set to
the name of the column containing the summation of
individual measurements. The combination of count
and sum are used to calculate an average value for
the recorded metric measurement. Cannot be used if
--db_librato_value is set
--db_librato_summarize_function Determines how to calculate values when rolling
up from raw values to higher resolution intervals.
Must be one of: 'average', 'sum', 'count', 'min',
'max'. If summarize_function is not set the
behavior defaults to average
--db_librato_sum_squares If --db_librato_count was set, this may be set to
the name of the column containing the summation of
the squared individual measurements. If set, a
standard deviation can be calculated for the
recorded metric measurement. Cannot be used if
--db_librato_value is set
--db_librato_type Type of metric to create (gauge or counter)
--db_librato_value The name of the column containing the value metric.
This must be a numeric value. Either this or a
combination of both --db_librato_count and
--db_librato_sum are REQUIRED
--db_mysql_engine An optional explicit storage engine to use when
creating MySQL tables (i.e. if a table does not
already exist). If not set, the default storage
engine will be used
--db_name Name of the database where tables should be created
and results stored. For Google BigQuery this should
be the dataset name. This parameter is not used for
Librato Metrics
--db_port If the --db argument is set, this argument
specifies the database server port. Default is the
corresponding database server default (3306 for
MySQL, 5432 for PostgreSQL, 80 for HTTP callbacks
and 443 for HTTPS callbacks). Not applicable to
Google BigQuery. This parameter is not used for
Librato Metrics
--db_pswd If the --db argument is set, this argument
specifies the database server password. Default is
''. Not applicable to Google BigQuery. HTTP AUTH
password for --db callback, or API token for
Librato Metrics
--db_prefix If the --db argument is set, this argument
specifies an optional prefix to use for the results
table. Default table name is the benchmark name
with no prefix
--db_suffix If the --db argument is set, this argument
specifies an optional suffix to use for the results
table. Default table suffix is the benchmark
version with periods replaced with underscores
--db_user If the --db argument is set, this argument
specifies the database server username. Not
applicable to Google BigQuery. HTTP AUTH user for
--db callbacks, user name for Librato Metrics. For
MySQL, the user needs create table, drop table, and
load data infile permissions. For PostgreSQL, the
permissions are the same except that the user needs
copy permissions in place of MySQL load data infile
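A minimal MySQL grant sketch (database and user names are illustrative; the
FILE privilege is assumed for server-side 'load data infile'):
mysql -u root -p -e "GRANT CREATE, DROP, INSERT ON benchmarks.* TO 'dbuser'@'%'"
mysql -u root -p -e "GRANT FILE ON *.* TO 'dbuser'@'%'"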
--iteration Explicit iteration number for test results -
otherwise 1 will be assumed unless results are in
numbered sub-directories
--nostore_csv Do not store SPEC CPU csv output files
--nostore_html Do not store SPEC CPU HTML reports or GIFs
--nostore_pdf Do not store SPEC CPU PDF reports
--nostore_rrd Do not store collectd RRD files
--nostore_text Do not store SPEC CPU text reports
--output The output directory to use for writing CSV files.
If not specified, the current working directory
will be used
--remove One or more columns to remove from the saved output
(CSV files or tables). This argument may be
repeated for multiple columns. To define multiple
values in ~/.ch_benchmark, use one line and comma
separated values. Wildcards are supported
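For example (column names are illustrative):
--remove meta_cpu_cache --remove "meta_hostname*"
or, equivalently, in ~/.ch_benchmark:
remove=meta_cpu_cache,meta_hostname*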
--store Save result artifacts to object storage. The
following argument values are supported:
azure => save artifacts to an Azure Blob
Storage container
google => save artifacts to a Google Cloud
Storage bucket
s3 => save artifacts to an S3
compatible bucket
When used, URLs to the corresponding result
artifacts will be included in the CSV/db
results
--store_container If the --store argument is set, this argument
specifies the name of the container/bucket
where results should be stored. This argument is
REQUIRED when --store is set
--store_endpoint Overrides default API endpoints for storage
platforms. If specified, the endpoint should be
compatible with the designated --store API
--store_insecure Use an insecure endpoint (http) instead of secure
(https)
--store_key If the --store argument is set, this argument
specifies the API key or user for the corresponding
endpoint. This argument is REQUIRED when --store is
set
--store_prefix If the --store argument is set, this argument
specifies a container directory prefix (to avoid
overwriting other results). The following dynamic
values may be included:
{date[_format]} => a date string (optionally
formatted per [format] - see
http://php.net/manual/en/function.date.php
for valid format options -
default format is Y-m-d)
{benchmark} => benchmark name
(meta-id value in benchmark.ini)
{version} => benchmark version
(meta-version value in benchmark.ini)
{iteration} => iteration number
{hostname} => the compute instance hostname
{meta_*} => any of the meta_* runtime
parameters. If a meta_* value
is designated but was not set
at runtime, it will be removed
from the prefix (including a
trailing /). Spaces are
replaced with _
{rand} => a random number. Random numbers
are the same for each unique
combination of other prefix
values
Multiple dynamic values may be specified, each
separated by a | character (e.g. {meta_compute_service_id|rand})
in which case the first dynamic value present will
be used. All substitutions are lowercase
The default prefix is:
{benchmark}_{version}/{meta_compute_service_id|meta_provider_id}/{meta_instance_id}/{meta_storage_config}/{meta_region}/{date|meta_test_id}/{meta_resource_id|hostname}/{meta_run_id|rand}-{iteration}
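For illustration only (every value below is hypothetical), that default prefix
might expand to something like:
speccpu2006_1_2/aws:ec2/c3.xlarge/ebs/us-east-1/2014-06-01/host1/42315-1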
--store_public If the --store argument is set, this argument
will result in stored artifact URLs being publicly
readable. If --store=azure, this parameter is
ignored because access rights are set at the
container level
--store_region If the --store argument is set, this argument
optionally specifies the service region. When an
explicit --store_endpoint argument is specified,
this argument is ignored. Otherwise, it is used to
determine the correct endpoint based on the --store
value specified. Valid regions for each --store
value are:
azure => not used (region is tied to the
account credentials)
google => not used (region is designated at time
of bucket creation)
s3 => required if --store_container is not
in the 'us-east-1' region. Region
identifiers are documented here:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
--store_secret If the --store argument is set, this argument
specifies the API secret or password for the
corresponding endpoint. This argument is REQUIRED
when --store is set
--verbose/-v Show verbose output - warning: this may produce a
lot of output
DEPENDENCIES
Saving artifacts using the --db and --store flags has the following
dependencies:
--db bigquery 'bq' => part of Google Cloud SDK see
https://developers.google.com/cloud/sdk/ for
detailed install instructions. 'bq' should be
pre-authenticated for the desired project where
the dataset exists and tables should be created
--db callback 'curl' => included with 'curl' package
--db mysql 'mysql' => included with 'mysql' package
--db postgresql 'psql' => included with 'postgresql' package
--store 'curl' => included with 'curl' package
SAVE SCHEMA
See README
USAGE
# save results to CSV files
./save.sh
# save results from 5 test iterations stored in numbered sub-directories
./save.sh ~/spec-testing
# save results to a PostgreSQL database
./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks
# save results to BigQuery and test artifacts (SPEC CPU reports) to S3
./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234
# save results to Librato Metrics using a single value column with aggregation
./save.sh --db librato --db_user [user] --db_pswd [API key] -v --db_librato_aggregate --db_librato_value metric
# save results to Librato Metrics using count + sum and custom name/source and other attributes
./save.sh --db librato --db_user [user] --db_pswd [API key] -v --db_librato_aggregate --db_librato_count samples --db_librato_display_units_short ms --db_librato_max metric_max --db_librato_min metric_min --db_librato_measure_time test_stopped --db_librato_name "{benchmark}-{test}" --db_librato_period 300 --db_librato_source "{meta_geo_region}" --db_librato_sum metric_sum --db_librato_sum_squares metric_sum_squares
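# save results via an HTTP callback with a custom request header (URL and header value are illustrative)
./save.sh --db callback --db_host callback.example.com/save --db_callback_header "Authorization: Bearer TOKEN"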
EXIT CODES:
0 saving of results successful
1 saving of results failed
EOF
exit
elif [ -f "/usr/bin/php" ]; then
$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/lib/save.php "$@"
exit $?
else
echo "Error: missing dependency php-cli (/usr/bin/php)"
exit 1
fi