WEBVTT

1
00:00:02.940 --> 00:00:03.440
All right.

2
00:00:03.440 --> 00:00:04.100
Good morning.

3
00:00:04.100 --> 00:00:08.990
[AUDIO OUT] to Cloud BI Trends,
The Hybrid Cloud is Dead.

4
00:00:08.990 --> 00:00:13.160
Today we'll be hearing from
Snowflake, Sigma and Olivela,

5
00:00:13.160 --> 00:00:15.680
who uses both
Snowflake and Sigma.

6
00:00:15.680 --> 00:00:17.420
If you have any
questions as you're

7
00:00:17.420 --> 00:00:19.790
going through and
listening to the webinar,

8
00:00:19.790 --> 00:00:23.960
feel free to type them
into the questions section.

9
00:00:23.960 --> 00:00:27.770
We'll be addressing questions
at the end of the webinar,

10
00:00:27.770 --> 00:00:30.950
but feel free to ask
questions as we go along.

11
00:00:30.950 --> 00:00:32.990
For now, I'm going
to hand it off

12
00:00:32.990 --> 00:00:37.870
to Ross Perez from
Snowflake to get us started.

13
00:00:37.870 --> 00:00:39.130
Well, thanks so much, Erica.

14
00:00:39.130 --> 00:00:42.430
And a really big thank
you to Olivela, of course,

15
00:00:42.430 --> 00:00:45.850
for jumping on the
webinar as well.

16
00:00:45.850 --> 00:00:50.380
We really are appreciative of
them using both our products.

17
00:00:50.380 --> 00:00:52.780
And it'll be exciting to be
hearing a little bit more

18
00:00:52.780 --> 00:00:54.430
about their story today.

19
00:00:54.430 --> 00:00:56.440
So, yes, I'm Ross
Perez, Senior Director

20
00:00:56.440 --> 00:00:57.880
of Marketing at Snowflake.

21
00:00:57.880 --> 00:01:03.280
And what I'm here to talk
about is really the impact

22
00:01:03.280 --> 00:01:07.990
that a true first party Cloud
solution for data warehousing

23
00:01:07.990 --> 00:01:09.040
can have.

24
00:01:09.040 --> 00:01:14.740
And I really like the title
of this webinar and the theme

25
00:01:14.740 --> 00:01:19.360
of avoiding the hybrid
Cloud and going fully

26
00:01:19.360 --> 00:01:20.300
on the public Cloud.

27
00:01:20.300 --> 00:01:24.070
And if you look at the history
of data platforms and data

28
00:01:24.070 --> 00:01:28.840
warehouses, you can see that
this is really the trend

29
00:01:28.840 --> 00:01:31.900
and where the trend is
going for data warehousing,

30
00:01:31.900 --> 00:01:35.560
and certainly for
analytics as well.

31
00:01:35.560 --> 00:01:37.780
We've gone through
a lot of changes,

32
00:01:37.780 --> 00:01:42.520
really, in the past 30 years in
the world of data warehousing,

33
00:01:42.520 --> 00:01:46.060
where we went from the
first relational databases

34
00:01:46.060 --> 00:01:50.170
to the first data
warehousing appliances

35
00:01:50.170 --> 00:01:52.520
through a whole slew of
different technologies,

36
00:01:52.520 --> 00:01:55.240
including open
source technologies

37
00:01:55.240 --> 00:01:56.830
in the early 2000s.

38
00:01:56.830 --> 00:02:01.810
And now we've kind of arrived to
the place where it's certainly

39
00:02:01.810 --> 00:02:05.675
first party Cloud
and, we believe,

40
00:02:05.675 --> 00:02:07.300
the data warehouse
built for the Cloud,

41
00:02:07.300 --> 00:02:09.082
which of course is Snowflake.

42
00:02:13.710 --> 00:02:19.320
Now if we kind of take a look at
what the goals are for somebody

43
00:02:19.320 --> 00:02:21.300
who's using a data
warehouse for analytics,

44
00:02:21.300 --> 00:02:23.490
they're really quite
straightforward.

45
00:02:23.490 --> 00:02:25.890
Of course you want to
use your data warehouse

46
00:02:25.890 --> 00:02:28.920
to develop your own
customer experience,

47
00:02:28.920 --> 00:02:33.840
to look at what's going
on in whatever business

48
00:02:33.840 --> 00:02:36.030
that you may be running,
and understanding

49
00:02:36.030 --> 00:02:38.850
how your customers are
interacting with you.

50
00:02:38.850 --> 00:02:41.192
It's quality assurance,
knowing that the product

51
00:02:41.192 --> 00:02:42.900
that you're putting
forward, whether it's

52
00:02:42.900 --> 00:02:45.330
a service or an actual
product, is something

53
00:02:45.330 --> 00:02:50.330
that people can really
understand and use properly

54
00:02:50.330 --> 00:02:52.050
and integrate with properly.

55
00:02:52.050 --> 00:02:53.970
Operational efficiency,
making sure everything

56
00:02:53.970 --> 00:02:55.470
is running properly.

57
00:02:55.470 --> 00:02:57.870
And innovation,
actually taking the data

58
00:02:57.870 --> 00:03:00.300
that you have in
your data warehouse,

59
00:03:00.300 --> 00:03:03.430
using a powerful
BI tool like Sigma

60
00:03:03.430 --> 00:03:09.330
to understand what you can do
to improve what you're doing

61
00:03:09.330 --> 00:03:12.910
and come up with new ideas and
opportunities for the future

62
00:03:12.910 --> 00:03:13.410
as well.

63
00:03:17.140 --> 00:03:17.870
Next slide.

64
00:03:17.870 --> 00:03:18.370
Great.

65
00:03:18.370 --> 00:03:22.210
So the capabilities
that Snowflake delivers

66
00:03:22.210 --> 00:03:24.430
are actually really
quite straightforward.

67
00:03:24.430 --> 00:03:27.160
So we're a data warehouse
built for the Cloud.

68
00:03:27.160 --> 00:03:28.810
And what that means
is that, instead

69
00:03:28.810 --> 00:03:33.130
of utilizing older, 30-year-old
data warehousing

70
00:03:33.130 --> 00:03:35.890
architecture, Snowflake is
actually built for the Cloud

71
00:03:35.890 --> 00:03:39.440
from the ground up with a
completely new architecture.

72
00:03:39.440 --> 00:03:42.550
This architecture utilizes the
power of Cloud infrastructure

73
00:03:42.550 --> 00:03:45.340
providers, such
as AWS and Azure,

74
00:03:45.340 --> 00:03:48.610
to completely
separate the storage

75
00:03:48.610 --> 00:03:51.970
and compute components
of the data warehouse

76
00:03:51.970 --> 00:03:55.240
and enable you to scale them
completely independently.

77
00:03:55.240 --> 00:03:58.390
What this means is that you
have a SaaS first party service,

78
00:03:58.390 --> 00:04:01.300
so it's relatively
easy to manage.

79
00:04:01.300 --> 00:04:03.580
It enables you to store
unlimited amounts of data.

80
00:04:03.580 --> 00:04:06.880
So you can now have all of
your data in one system.

81
00:04:06.880 --> 00:04:08.560
Of course you can
scale on demand

82
00:04:08.560 --> 00:04:11.230
and use the Cloud
to basically match

83
00:04:11.230 --> 00:04:14.080
the scale of what you're
deploying with Snowflake

84
00:04:14.080 --> 00:04:15.790
to whatever your need is.

85
00:04:15.790 --> 00:04:19.240
And the kind of
great part about it

86
00:04:19.240 --> 00:04:23.500
is that this is not an open
source technology that's

87
00:04:23.500 --> 00:04:24.370
difficult to use.

88
00:04:24.370 --> 00:04:27.450
You can use standard SQL
to interact with Snowflake

89
00:04:27.450 --> 00:04:29.190
and work with it day to day.

90
00:04:33.630 --> 00:04:36.720
So let's dig into these
individual components

91
00:04:36.720 --> 00:04:37.930
in a little bit more detail.

92
00:04:37.930 --> 00:04:40.170
So what do we mean
by easy management?

93
00:04:40.170 --> 00:04:42.570
Well, with traditional
data warehouses,

94
00:04:42.570 --> 00:04:45.040
they were designed at
a time that was much,

95
00:04:45.040 --> 00:04:47.250
much longer ago,
about 30 years ago,

96
00:04:47.250 --> 00:04:50.070
and sometimes you can
experience a significant amount

97
00:04:50.070 --> 00:04:51.878
of management overhead.

98
00:04:51.878 --> 00:04:53.670
And it's important to
point out that that's

99
00:04:53.670 --> 00:04:57.540
even true with data warehouses
that are technically

100
00:04:57.540 --> 00:05:02.220
in the Cloud, but in many
cases are older technologies

101
00:05:02.220 --> 00:05:05.460
that have simply been kind of
warmed over and put into the Cloud.

102
00:05:05.460 --> 00:05:07.800
But really in the
background there's

103
00:05:07.800 --> 00:05:10.980
still older technologies that
require a lot of management.

104
00:05:10.980 --> 00:05:12.480
Well, with Snowflake,
you don't have

105
00:05:12.480 --> 00:05:13.897
to worry about
infrastructure, you

106
00:05:13.897 --> 00:05:15.360
don't have to
worry about tuning,

107
00:05:15.360 --> 00:05:17.460
there's no optimization
or indexing,

108
00:05:17.460 --> 00:05:20.130
you don't have to partition
and worry about how storage

109
00:05:20.130 --> 00:05:24.090
is going to work, certainly
no vacuuming or sorting,

110
00:05:24.090 --> 00:05:25.260
or workload management.

111
00:05:25.260 --> 00:05:27.690
So the tasks that you
would traditionally

112
00:05:27.690 --> 00:05:31.822
associate with managing and
optimizing a data warehouse

113
00:05:31.822 --> 00:05:33.030
aren't required in Snowflake.

114
00:05:33.030 --> 00:05:37.530
It's a SaaS data warehouse that
really manages the optimization

115
00:05:37.530 --> 00:05:40.410
and tuning for you, so that you
don't have to worry about it.

116
00:05:45.440 --> 00:05:47.700
Another capability that
the Cloud really affords us

117
00:05:47.700 --> 00:05:51.860
is to account for newer
data types that are becoming

118
00:05:51.860 --> 00:05:53.653
significantly more common.

119
00:05:53.653 --> 00:05:55.070
For instance,
semi-structured data

120
00:05:55.070 --> 00:05:57.980
like JSON, Avro,
XML, Parquet, and ORC

121
00:05:57.980 --> 00:05:59.990
can be stored in
Snowflake natively,

122
00:05:59.990 --> 00:06:02.205
alongside CSV and text data.

123
00:06:02.205 --> 00:06:03.830
You also notice that
this aligns really

124
00:06:03.830 --> 00:06:05.640
well with the
capabilities in Sigma

125
00:06:05.640 --> 00:06:07.880
to query semi-structured data.

126
00:06:07.880 --> 00:06:09.830
So with Snowflake
and Sigma together,

127
00:06:09.830 --> 00:06:12.830
you can load semi-structured
data into a variant type

128
00:06:12.830 --> 00:06:14.690
column in Snowflake,
and then you

129
00:06:14.690 --> 00:06:17.780
can query that data, using
either Snowflake or Sigma.

130
00:06:17.780 --> 00:06:20.740
In Snowflake you can use dot
notation to query this data.

131
00:06:20.740 --> 00:06:23.030
In Sigma you can
natively connect and use

132
00:06:23.030 --> 00:06:26.720
the semi-structured
data through the UI.

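As a concrete sketch of the dot-notation querying just described (the table and column names here are hypothetical, not from the webinar):

```sql
-- Hypothetical sketch of querying semi-structured data in Snowflake.
-- Assumes a table RAW_ORDERS with a VARIANT column V holding JSON such as
-- {"product": {"id": 1, "name": "Widget"}, "user_id": 42}.
CREATE OR REPLACE TABLE RAW_ORDERS (V VARIANT);

INSERT INTO RAW_ORDERS
  SELECT PARSE_JSON('{"product": {"id": 1, "name": "Widget"}, "user_id": 42}');

-- Dot notation reaches into the nested structure; :: casts yield typed columns.
SELECT
  V:product.id::INTEGER  AS PRODUCT_ID,
  V:product.name::STRING AS PRODUCT_NAME,
  V:user_id::INTEGER     AS USER_ID
FROM RAW_ORDERS;
```

In Sigma, the same extraction happens through the UI, so no SQL needs to be written by hand.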
133
00:06:26.720 --> 00:06:30.260
So regardless of how you want
to interact with your data,

134
00:06:30.260 --> 00:06:31.790
the really important
thing here is

135
00:06:31.790 --> 00:06:34.370
that you can use Snowflake
and Sigma together

136
00:06:34.370 --> 00:06:37.850
to query any type
of data that you may

137
00:06:37.850 --> 00:06:39.863
be collecting and utilizing.

138
00:06:39.863 --> 00:06:41.780
The other thing that's
important to point out,

139
00:06:41.780 --> 00:06:44.900
and really a huge advantage
to using the public Cloud

140
00:06:44.900 --> 00:06:48.295
for services like this, is
that you can store as much data

141
00:06:48.295 --> 00:06:48.920
as you need to.

142
00:06:48.920 --> 00:06:51.800
You never have to worry
about the capabilities

143
00:06:51.800 --> 00:06:54.260
or the actual infrastructure
or hardware sitting

144
00:06:54.260 --> 00:06:57.080
behind the data that you're
loading because you know

145
00:06:57.080 --> 00:07:00.350
that, at the end of the day,
you can store petabytes of data

146
00:07:00.350 --> 00:07:03.792
if you need to and you can
start loading it immediately.

147
00:07:07.540 --> 00:07:08.040
All right.

148
00:07:08.040 --> 00:07:09.810
So this slide builds.

149
00:07:09.810 --> 00:07:13.410
And I think you know it's really
important to get into here,

150
00:07:13.410 --> 00:07:18.570
is the reason why on demand
usage is so important.

151
00:07:18.570 --> 00:07:21.960
And this, again, is
a huge differentiator

152
00:07:21.960 --> 00:07:24.420
between the hybrid Cloud
and the public Cloud.

153
00:07:24.420 --> 00:07:28.260
So if we look at data
warehousing as a use case,

154
00:07:28.260 --> 00:07:30.930
and you can click next here.

155
00:07:30.930 --> 00:07:34.090
So data warehousing
usage varies.

156
00:07:34.090 --> 00:07:35.730
It goes up and down over time.

157
00:07:35.730 --> 00:07:37.710
A traditional
warehouse, which

158
00:07:37.710 --> 00:07:41.460
we can see in the next
click, is completely

159
00:07:41.460 --> 00:07:44.950
inflexible to the change
in usage over time.

160
00:07:44.950 --> 00:07:49.470
In other words, if you have a
huge rush of people using Sigma

161
00:07:49.470 --> 00:07:52.620
on Monday morning, well
traditional data warehouses

162
00:07:52.620 --> 00:07:55.360
are unable to scale
to meet this demand.

163
00:07:55.360 --> 00:07:57.360
And if you look at hybrid
Cloud data warehouses,

164
00:07:57.360 --> 00:07:59.070
it's exactly the same.

165
00:07:59.070 --> 00:08:02.010
But Snowflake uses
the public Cloud

166
00:08:02.010 --> 00:08:06.370
to enable as much elasticity
as you need at any given time.

167
00:08:06.370 --> 00:08:09.300
So instead of matching compute
to whatever your highest

168
00:08:09.300 --> 00:08:12.630
amount of use could
be in the future,

169
00:08:12.630 --> 00:08:14.970
you can match your compute
at any given moment

170
00:08:14.970 --> 00:08:16.890
to exactly what you're using.

171
00:08:16.890 --> 00:08:19.650
This means that you don't have
to pay for times when you're not

172
00:08:19.650 --> 00:08:21.660
using the data warehouse.

173
00:08:21.660 --> 00:08:24.720
It also means that you can
avoid purchasing more capacity

174
00:08:24.720 --> 00:08:27.990
than you need, so
that you can support

175
00:08:27.990 --> 00:08:30.042
times of maximum demand.

176
00:08:30.042 --> 00:08:31.750
And, of course, it
allows you to scale up

177
00:08:31.750 --> 00:08:36.280
and down transparently
and automatically.

178
00:08:36.280 --> 00:08:38.350
The other advantage
here with Snowflake,

179
00:08:38.350 --> 00:08:40.210
and the last one that
I'll talk to today,

180
00:08:40.210 --> 00:08:43.280
is the ability to use
standard skills and tools.

181
00:08:43.280 --> 00:08:48.585
So, for instance, most
people are familiar with SQL,

182
00:08:48.585 --> 00:08:51.720
and with Snowflake you
can utilize standard SQL

183
00:08:51.720 --> 00:08:54.250
to be able to query and
interact with the data.

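"Standard SQL" here just means ordinary ANSI-style queries work unchanged; a trivial, hypothetical example of day-to-day use:

```sql
-- Plain ANSI-style SQL; nothing vendor-specific is needed for everyday queries.
-- The ORDERS table and its columns are hypothetical.
SELECT
  REGION,
  COUNT(*)    AS ORDER_COUNT,
  SUM(AMOUNT) AS REVENUE
FROM ORDERS
WHERE ORDER_DATE >= '2019-01-01'
GROUP BY REGION
ORDER BY REVENUE DESC;
```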
184
00:08:57.730 --> 00:08:59.830
You can also use
tools like Sigma,

185
00:08:59.830 --> 00:09:03.910
and other ETL tools that
are available to you,

186
00:09:03.910 --> 00:09:07.300
to be able to connect
with Snowflake.

187
00:09:07.300 --> 00:09:11.010
And because Snowflake
utilizes standard SQL,

188
00:09:11.010 --> 00:09:12.670
the compatibility
with other tools

189
00:09:12.670 --> 00:09:15.920
is really high, making it easier
to integrate with the systems

190
00:09:15.920 --> 00:09:17.420
and tools that
you're already using.

191
00:09:20.320 --> 00:09:22.780
And it's highly
available, meaning

192
00:09:22.780 --> 00:09:26.110
that you don't have to
worry about Snowflake.

193
00:09:26.110 --> 00:09:28.840
Basically, out of the
box, it's usable in the way

194
00:09:28.840 --> 00:09:31.330
that you need it to be
and it's available

195
00:09:31.330 --> 00:09:32.950
and it's highly secure as well.

196
00:09:38.610 --> 00:09:39.420
All right.

197
00:09:39.420 --> 00:09:41.920
So at this point,
you may be wondering,

198
00:09:41.920 --> 00:09:46.500
hopefully you're wondering, how
to be able to not only utilize

199
00:09:46.500 --> 00:09:50.520
Snowflake, but also utilize
these other ecosystem

200
00:09:50.520 --> 00:09:53.070
tools that I've been
referring to, such as Sigma.

201
00:09:53.070 --> 00:09:56.880
And you may also be
interested in working

202
00:09:56.880 --> 00:09:59.820
with an ETL vendor that can
help you start getting data

203
00:09:59.820 --> 00:10:01.170
into Snowflake.

204
00:10:01.170 --> 00:10:03.750
Well we had a lot of customers
who were asking us,

205
00:10:03.750 --> 00:10:05.850
"Well, hey we're
enjoying using Snowflake

206
00:10:05.850 --> 00:10:08.240
and we understand you
have this broad ecosystem,

207
00:10:08.240 --> 00:10:10.530
but how am I going
to get data into it?

208
00:10:10.530 --> 00:10:13.590
How am I going to
stage this data?

209
00:10:13.590 --> 00:10:17.520
And how am I going to be able
to start analyzing the data?"

210
00:10:17.520 --> 00:10:21.990
Well, of course, with Partner
Connect what you can do

211
00:10:21.990 --> 00:10:25.920
is log into Snowflake
and you can very easily

212
00:10:25.920 --> 00:10:29.760
start up a trial with
any of the partner tools

213
00:10:29.760 --> 00:10:32.430
that are available through
Partner Connect, including

214
00:10:32.430 --> 00:10:36.690
Sigma and ETL vendors
such as Fivetran.

215
00:10:36.690 --> 00:10:39.180
So you can go into
Snowflake, and without having

216
00:10:39.180 --> 00:10:43.020
to integrate these
tools on your own,

217
00:10:43.020 --> 00:10:44.520
you can actually
use Partner Connect

218
00:10:44.520 --> 00:10:45.933
to get started right away.

219
00:10:45.933 --> 00:10:47.100
And it makes it really easy.

220
00:10:47.100 --> 00:10:50.190
And actually if you're
interested in seeing

221
00:10:50.190 --> 00:10:52.180
how you can connect
Snowflake with Sigma,

222
00:10:52.180 --> 00:10:57.330
there's a YouTube video
that's available online.

223
00:10:57.330 --> 00:11:00.870
You can search for Sigma and
Snowflake Partner Connect

224
00:11:00.870 --> 00:11:04.320
and it'll show you the actual
steps for connecting Snowflake

225
00:11:04.320 --> 00:11:06.960
to Sigma quite easily.

226
00:11:12.190 --> 00:11:14.260
And here you can
see the interface

227
00:11:14.260 --> 00:11:16.705
for actually going
through that connection.

228
00:11:16.705 --> 00:11:17.580
And that's it for me.

229
00:11:22.650 --> 00:11:24.040
Hi, everyone.

230
00:11:24.040 --> 00:11:25.140
My name is Ali Sayeed.

231
00:11:25.140 --> 00:11:28.680
I'm a Product Solutions
Engineer with Sigma.

232
00:11:28.680 --> 00:11:31.350
So I'm here to kind of
talk through the product,

233
00:11:31.350 --> 00:11:33.180
guide you through
it, give you a demo,

234
00:11:33.180 --> 00:11:35.478
then answer any
technical questions.

235
00:11:41.100 --> 00:11:45.360
So this new Cloud architecture,
that Ross has talked about,

236
00:11:45.360 --> 00:11:47.040
has really created
a new opportunity

237
00:11:47.040 --> 00:11:49.330
with that new technology.

238
00:11:49.330 --> 00:11:51.910
And modern analytics
now takes advantage

239
00:11:51.910 --> 00:11:54.410
of your Cloud data warehouse.

240
00:11:54.410 --> 00:11:58.000
BI has now evolved to
be Cloud built, right.

241
00:11:58.000 --> 00:12:00.040
And now can handle
semi-structured data,

242
00:12:00.040 --> 00:12:01.750
such as JSON.

243
00:12:01.750 --> 00:12:04.480
Even allows you to query
billions of records of data

244
00:12:04.480 --> 00:12:09.350
and ask tough questions, all
without writing a line of SQL.

245
00:12:09.350 --> 00:12:12.040
What that really does,
it kind of provides

246
00:12:12.040 --> 00:12:15.258
that direct and secure
and lasting performance,

247
00:12:15.258 --> 00:12:17.050
that Ross just talked
about, but now anyone

248
00:12:17.050 --> 00:12:18.300
in [INAUDIBLE] can now use it.

249
00:12:23.910 --> 00:12:24.420
So, Sigma.

250
00:12:24.420 --> 00:12:28.170
Sigma has reinvented BI
analytics for the Cloud, right.

251
00:12:28.170 --> 00:12:31.260
So now it's really easy
and powerful to use.

252
00:12:31.260 --> 00:12:34.920
It's secure and governed, right.

253
00:12:34.920 --> 00:12:38.130
It's got a usage-based
pricing model.

254
00:12:38.130 --> 00:12:40.110
And, just like
Snowflake, it requires

255
00:12:40.110 --> 00:12:43.650
no upfront work for you and your
organization to get started.

256
00:12:43.650 --> 00:12:45.720
So really gone are
the days of having

257
00:12:45.720 --> 00:12:48.660
to model and map the data
beforehand, or ask the data

258
00:12:48.660 --> 00:12:50.915
team to work with the new data.

259
00:12:50.915 --> 00:12:52.290
The traditional
surveyor that was

260
00:12:52.290 --> 00:12:55.854
made famous by traditional
tools, such as [INAUDIBLE].

261
00:13:00.330 --> 00:13:01.448
All right.

262
00:13:01.448 --> 00:13:02.490
So let's get to the demo.

263
00:13:14.250 --> 00:13:15.620
OK.

264
00:13:15.620 --> 00:13:17.950
So this is Sigma.

265
00:13:17.950 --> 00:13:21.190
It's a cloud native
app, so browser based.

266
00:13:21.190 --> 00:13:24.580
Simply log onto
app.sigmacomputing.com.

267
00:13:24.580 --> 00:13:27.310
This is the first screen
you see, the Welcome page.

268
00:13:27.310 --> 00:13:30.250
And we're going to use Sigma to
connect to data on a Snowflake

269
00:13:30.250 --> 00:13:31.260
instance.

270
00:13:31.260 --> 00:13:33.760
So the first thing I'm going
to show you is our team spaces.

271
00:13:33.760 --> 00:13:35.790
For example, when
you connect to data

272
00:13:35.790 --> 00:13:38.750
you can expose data and
dashboards at different levels.

273
00:13:38.750 --> 00:13:42.100
So, for example, I can expose it
at the company-wide level,

274
00:13:42.100 --> 00:13:44.680
here; at a team level,
for example, I'm

275
00:13:44.680 --> 00:13:47.660
part of the marketing and
product solutions teams.

276
00:13:47.660 --> 00:13:50.710
So these team
spaces work as

277
00:13:50.710 --> 00:13:52.210
kind of collective
work environments

278
00:13:52.210 --> 00:13:56.720
where your team can work on
specific data sets together,

279
00:13:56.720 --> 00:13:59.513
additionally to provide another
kind of permissions layer

280
00:13:59.513 --> 00:14:01.180
on top of your Snowflake
data warehouse.

281
00:14:01.180 --> 00:14:03.315
So for example, unless
I'm part of this team,

282
00:14:03.315 --> 00:14:04.690
I'm unable to see
the data that's

283
00:14:04.690 --> 00:14:06.273
been exposed at this level.

284
00:14:06.273 --> 00:14:07.690
Then again there's your
kind of private

285
00:14:07.690 --> 00:14:10.000
My Documents work area.

286
00:14:10.000 --> 00:14:12.580
I'm going to connect
to data and show you

287
00:14:12.580 --> 00:14:16.520
how you can build data and
visualizations in Sigma

288
00:14:16.520 --> 00:14:18.300
from scratch.

289
00:14:18.300 --> 00:14:21.582
So we're going to go ahead
and click New worksheet.

290
00:14:21.582 --> 00:14:23.290
So you'll notice
there's three ways I can

291
00:14:23.290 --> 00:14:25.690
get started with data in Sigma.

292
00:14:25.690 --> 00:14:29.037
So, for example, if I do have
a SQL query already written

293
00:14:29.037 --> 00:14:31.120
and I want to start from
that, I have that ability

294
00:14:31.120 --> 00:14:33.490
to input that into our editor.

295
00:14:33.490 --> 00:14:36.400
We also have the notion
of a reference worksheet.

296
00:14:36.400 --> 00:14:38.500
So that reference
worksheet is basically

297
00:14:38.500 --> 00:14:44.055
a non-materialized view that you
can curate and expose to users.

298
00:14:44.055 --> 00:14:45.430
For example, if
there's users you

299
00:14:45.430 --> 00:14:49.270
don't want to access
the database directly.

300
00:14:49.270 --> 00:14:50.730
And then we're
directly connected

301
00:14:50.730 --> 00:14:53.630
to a table in the database, and
this is what most of our users

302
00:14:53.630 --> 00:14:54.130
will use.

303
00:14:57.410 --> 00:14:59.160
So we're going to
connect to our demo data

304
00:14:59.160 --> 00:15:01.550
on our Snowflake instance.

305
00:15:01.550 --> 00:15:04.768
You'll see immediately the
schemas that I have access to.

306
00:15:04.768 --> 00:15:07.310
The data I want to connect to
is in this Insta Credit schema,

307
00:15:07.310 --> 00:15:09.600
it's called Orders JSON Large.

308
00:15:09.600 --> 00:15:12.220
Select that.

309
00:15:12.220 --> 00:15:15.360
It's going to show you a preview
of this data, some information

310
00:15:15.360 --> 00:15:16.200
about it.

311
00:15:16.200 --> 00:15:18.630
So over 18 million rows of data.

312
00:15:18.630 --> 00:15:20.970
Again, this is instantly
more than anything

313
00:15:20.970 --> 00:15:24.315
you could do as an
analyst in Excel.

314
00:15:24.315 --> 00:15:25.690
When we take a
look at this data,

315
00:15:25.690 --> 00:15:30.210
we see OK, it's actually order
data with a large nested JSON

316
00:15:30.210 --> 00:15:31.460
field in it.

317
00:15:31.460 --> 00:15:34.030
So we're going to
use Sigma to parse

318
00:15:34.030 --> 00:15:37.120
out the relevant fields from
this JSON and flatten them out.

319
00:15:37.120 --> 00:15:42.040
And then use that to
basically create

320
00:15:42.040 --> 00:15:45.580
customer cohorts and determine
what those customer cohorts'

321
00:15:45.580 --> 00:15:48.740
performance [INAUDIBLE].

322
00:15:48.740 --> 00:15:51.980
When I click get started,
that actually brings the data

323
00:15:51.980 --> 00:15:53.390
into our development UI.

324
00:15:53.390 --> 00:15:55.820
So it's a spreadsheet
based UI, so very

325
00:15:55.820 --> 00:15:57.820
easy to get familiar with.

326
00:15:57.820 --> 00:16:00.650
So you'll notice that there's
certain functions that

327
00:16:00.650 --> 00:16:03.740
[INAUDIBLE], you have the
column headings, such as hiding

328
00:16:03.740 --> 00:16:06.550
columns, sorting, aggregating.

329
00:16:06.550 --> 00:16:07.842
There's this formula bar, here.

330
00:16:07.842 --> 00:16:10.300
This is where we're going to
write formulas, take advantage

331
00:16:10.300 --> 00:16:11.990
of our functions
library to do that.

332
00:16:11.990 --> 00:16:13.820
And on the right,
on the Inspector,

333
00:16:13.820 --> 00:16:15.278
this is where we're
going to create

334
00:16:15.278 --> 00:16:16.400
groupings and aggregates.

335
00:16:16.400 --> 00:16:20.770
You can also join
to other data sources as well.

336
00:16:20.770 --> 00:16:23.270
All right, so I'm going
to get started building.

337
00:16:23.270 --> 00:16:25.000
So maybe the first
thing I want to do

338
00:16:25.000 --> 00:16:29.870
is go ahead and sort this
data in an ascending manner.

339
00:16:29.870 --> 00:16:32.170
The oldest records appear first.

340
00:16:32.170 --> 00:16:33.290
We've done that.

341
00:16:33.290 --> 00:16:35.000
Again, next thing
I want to do, maybe

342
00:16:35.000 --> 00:16:36.535
I want to create a column.

343
00:16:36.535 --> 00:16:37.910
So we have quantity,
unit price.

344
00:16:37.910 --> 00:16:39.700
I want to find the
total line amount.

345
00:16:39.700 --> 00:16:41.640
I can start to type
that in the formula bar,

346
00:16:41.640 --> 00:16:45.550
notice that there's
autocomplete built in, like so.

347
00:16:45.550 --> 00:16:48.380
And we'll go ahead
and call this amount.

348
00:16:48.380 --> 00:16:49.460
Just like that.

349
00:16:49.460 --> 00:16:53.160
Additionally, I can also
format this as a currency.

350
00:16:53.160 --> 00:16:54.960
So notice how fast
everything is happening.

351
00:16:54.960 --> 00:16:57.180
Right?

352
00:16:57.180 --> 00:16:59.010
And the reason that
it's happening so fast

353
00:16:59.010 --> 00:17:02.550
with such a large data set
is because Sigma is actually

354
00:17:02.550 --> 00:17:06.240
creating actual SQL and
executing it in the database.

355
00:17:06.240 --> 00:17:09.450
So we're generating SQL and we're
taking advantage of Snowflake

356
00:17:09.450 --> 00:17:12.510
and its elastic compute
to execute these queries.

357
00:17:12.510 --> 00:17:14.550
So I can actually
take a look at what

358
00:17:14.550 --> 00:17:17.369
that SQL looks like that
Sigma is generating,

359
00:17:17.369 --> 00:17:20.619
that's being pushed
to the database.

360
00:17:20.619 --> 00:17:26.140
Because Sigma is creating actual
SQL, there's no need to model

361
00:17:26.140 --> 00:17:28.450
or map the data out beforehand.

362
00:17:28.450 --> 00:17:31.570
You're up and running as soon
as you connect to your database.

363
00:17:31.570 --> 00:17:34.970
This also means that we're
never working on extracts

364
00:17:34.970 --> 00:17:37.502
and we're never pulling
out the data to process it.

365
00:17:37.502 --> 00:17:38.960
Processing it in
the database, that

366
00:17:38.960 --> 00:17:40.930
means it's never leaving
the secure environment

367
00:17:40.930 --> 00:17:42.597
and we're never storing
or caching data.

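For a sense of what such a pushed-down query might look like, here is a rough sketch of the shape only (illustrative, not Sigma's actual generated SQL; the table and column names are hypothetical):

```sql
-- Illustrative shape of a worksheet's pushed-down query, not Sigma's exact SQL.
-- Assumes a hypothetical ORDERS_JSON_LARGE table with QUANTITY, UNIT_PRICE,
-- and ORDER_DATE columns plus a VARIANT column V carrying the nested JSON.
SELECT
  V:user_id::INTEGER         AS USER_ID,
  SUM(QUANTITY * UNIT_PRICE) AS USER_AMOUNT,    -- total amount per user
  MIN(ORDER_DATE)            AS FIRST_PURCHASE  -- basis for cohort assignment
FROM ORDERS_JSON_LARGE
GROUP BY USER_ID;
```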
368
00:17:46.720 --> 00:17:48.820
The next thing I want
to do is flatten out

369
00:17:48.820 --> 00:17:51.730
some relevant fields
I want from this JSON.

370
00:17:51.730 --> 00:17:54.520
Sigma will recognize
that JSON in there

371
00:17:54.520 --> 00:17:57.480
and give you the option
to extract columns.

372
00:17:57.480 --> 00:17:59.050
So I'll see this
nested JSON here,

373
00:17:59.050 --> 00:18:01.300
I can actually choose the
columns that I want from it.

374
00:18:01.300 --> 00:18:04.000
For example, maybe I want
product ID, product name,

375
00:18:04.000 --> 00:18:04.967
and user ID.

376
00:18:08.550 --> 00:18:11.190
Then Sigma will parse, or
flatten those columns out.

377
00:18:11.190 --> 00:18:14.860
I can go ahead and hide this
JSON because I don't need it.

378
00:18:14.860 --> 00:18:21.570
And quickly rename
the fields, like so.

379
00:18:26.557 --> 00:18:28.890
The next thing I want to do
is I want to create a group.

380
00:18:28.890 --> 00:18:30.830
So I want to group my
data by the user ID

381
00:18:30.830 --> 00:18:34.880
so that I can then create
my customer cohorts.

382
00:18:34.880 --> 00:18:37.700
I can easily do that by
taking the user ID column

383
00:18:37.700 --> 00:18:40.916
and putting it on
a grouping level.

384
00:18:40.916 --> 00:18:43.140
So now we've created
an aggregate there.

385
00:18:43.140 --> 00:18:45.128
I can create a
column or calculation

386
00:18:45.128 --> 00:18:46.170
aggregating by levels.

387
00:18:46.170 --> 00:18:48.480
For example, I want to know
the total amount generated

388
00:18:48.480 --> 00:18:49.100
per user.

389
00:18:49.100 --> 00:18:53.440
I can simply do sum of amount.

390
00:18:53.440 --> 00:18:57.970
Call this user
[INAUDIBLE] like so.

391
00:18:57.970 --> 00:19:01.460
So, yeah, you can easily do
a cross level calculation

392
00:19:01.460 --> 00:19:03.203
like that.

393
00:19:03.203 --> 00:19:04.370
I'll create one more column.

394
00:19:04.370 --> 00:19:08.660
So maybe I want to put each
customer into a cohort,

395
00:19:08.660 --> 00:19:10.290
when did they become a customer?

396
00:19:10.290 --> 00:19:13.230
I could say, OK let's do that
based on their first purchase

397
00:19:13.230 --> 00:19:13.730
date.

398
00:19:13.730 --> 00:19:15.480
So I'll do minimum on
their purchase date.

399
00:19:20.400 --> 00:19:22.950
And then, additionally,
so you see the data

400
00:19:22.950 --> 00:19:24.780
at its most granular
level right now,

401
00:19:24.780 --> 00:19:28.220
I can go ahead and collapse
the data to the user

402
00:19:28.220 --> 00:19:31.430
ID grouping you see there.

403
00:19:31.430 --> 00:19:34.250
And I want to see it
at the quarter level.

404
00:19:34.250 --> 00:19:39.557
I can truncate the data to the
quarter, via the column heading

405
00:19:39.557 --> 00:19:40.640
because it's a date field.

406
00:19:40.640 --> 00:19:44.190
So we'll do that like so.

407
00:19:44.190 --> 00:19:45.740
And I can change
any kind of formats

408
00:19:45.740 --> 00:19:49.670
if I want to see it in a short
day, like that, I can do so.

409
00:19:49.670 --> 00:19:53.950
I can even say let's go ahead
and take a look at the month.

410
00:19:53.950 --> 00:19:57.450
Let's go ahead and
call this cohort.

411
00:19:57.450 --> 00:19:59.690
I've created the
customer cohort.

412
00:19:59.690 --> 00:20:02.000
Again, I wanted to
group my customers

413
00:20:02.000 --> 00:20:05.930
into each cohort to see how
cohorts perform individually.

414
00:20:05.930 --> 00:20:07.820
So I'll simply take
that cohort field

415
00:20:07.820 --> 00:20:13.460
and add it to a new
aggregate level like so.

416
00:20:13.460 --> 00:20:16.500
Again, I can collapse the
data to the aggregate level,

417
00:20:16.500 --> 00:20:19.680
cohort level, so I can see
the quarters that are present.

418
00:20:19.680 --> 00:20:23.650
Let's go ahead and do some more
calculations at this level now.

419
00:20:23.650 --> 00:20:25.640
So for example,
maybe I want to know

420
00:20:25.640 --> 00:20:30.570
the total revenue or the
total number of users

421
00:20:30.570 --> 00:20:31.980
that were in each cohort.

422
00:20:31.980 --> 00:20:35.174
I can simply do a
count on the user ID.

423
00:20:46.160 --> 00:20:48.380
So what you've seen
so far was basically

424
00:20:48.380 --> 00:20:51.730
a lot of the simple functions
from our functions library,

425
00:20:51.730 --> 00:20:54.440
but we've pretty much covered
99% of all SQL functions,

426
00:20:54.440 --> 00:20:56.660
anything from
date/time functions

427
00:20:56.660 --> 00:21:00.270
such as date differences; text
functions, such as contains

428
00:21:00.270 --> 00:21:03.830
or user [INAUDIBLE]; and then
quite a few window functions,

429
00:21:03.830 --> 00:21:05.090
such as moving aggregates.

430
00:21:05.090 --> 00:21:07.010
I'm going to use a
window function now.

431
00:21:07.010 --> 00:21:10.070
I'm going to use cumulative
sum to see how our customer

432
00:21:10.070 --> 00:21:12.545
acquisition has kind of trended
throughout the quarters.

433
00:21:12.545 --> 00:21:15.680
I'll start to type it in to
find the function I want

434
00:21:15.680 --> 00:21:17.690
to use via the auto complete.

435
00:21:17.690 --> 00:21:22.350
I want to do this on the user
count, the customer count,

436
00:21:22.350 --> 00:21:25.460
like so.

437
00:21:25.460 --> 00:21:27.862
We can now see how our
customer acquisition has

438
00:21:27.862 --> 00:21:29.570
kind of progressed
throughout the quarters

439
00:21:29.570 --> 00:21:32.780
starting with 3,000 all the
way to 250,000 customers we

440
00:21:32.780 --> 00:21:34.110
have now.
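
The running total built here with a cumulative sum window function can be sketched like this (the per-quarter acquisition numbers are invented, chosen only to land on the 3,000-to-250,000 range mentioned):

```python
from itertools import accumulate

# Hypothetical new customers acquired in each successive quarter
new_customers = [3000, 12000, 45000, 90000, 100000]

# Running total, analogous to SUM(new_customers) OVER (ORDER BY quarter)
running_total = list(accumulate(new_customers))

print(running_total)  # [3000, 15000, 60000, 150000, 250000]
```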

441
00:21:34.110 --> 00:21:37.980
Let's just call
this cumulative sum.

442
00:21:37.980 --> 00:21:40.917
I'll create one
more metric, here.

443
00:21:40.917 --> 00:21:43.250
We'll call this, maybe we
want a performance metric

444
00:21:43.250 --> 00:21:44.917
that we want to chart,
for example, what

445
00:21:44.917 --> 00:21:49.430
was the median of user revenue.

446
00:21:49.430 --> 00:21:51.360
Again, a cross
level calculation.

447
00:21:54.390 --> 00:21:58.420
Call this median revenue.
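
A median of per-user revenue within each cohort, the kind of cross-level calculation described, might look like this (the revenue figures are hypothetical):

```python
from statistics import median

# Hypothetical per-user revenue, grouped by cohort
revenue_by_cohort = {
    "2018-Q1": [120.0, 80.0, 200.0],
    "2018-Q2": [60.0, 90.0],
}

# Median user revenue per cohort, akin to MEDIAN(revenue) ... GROUP BY cohort
median_revenue = {cohort: median(values) for cohort, values in revenue_by_cohort.items()}

print(median_revenue)  # {'2018-Q1': 120.0, '2018-Q2': 75.0}
```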

448
00:21:58.420 --> 00:22:01.120
So now I've arrived at the
data set that I wanted.

449
00:22:01.120 --> 00:22:04.160
I can go ahead and now
click the Publish button.

450
00:22:04.160 --> 00:22:07.240
And what that does is
actually create a new version

451
00:22:07.240 --> 00:22:09.010
of this worksheet for me.

452
00:22:09.010 --> 00:22:10.360
So version control is built in.

453
00:22:10.360 --> 00:22:12.110
If I want to revert
to a previous version,

454
00:22:12.110 --> 00:22:14.620
I can actually do so
by clicking over here.

455
00:22:14.620 --> 00:22:19.810
You have the ability to download
the data as a CSV or Excel.

456
00:22:19.810 --> 00:22:22.660
You can also send this
data on a scheduled basis.

457
00:22:22.660 --> 00:22:25.090
For example, a list of
email recipients, such as your boss,

458
00:22:25.090 --> 00:22:28.480
could get this data as an Excel
or PDF every Monday morning

459
00:22:28.480 --> 00:22:29.046
at 9AM.

460
00:22:32.268 --> 00:22:33.810
But I'm not done
yet, I actually want

461
00:22:33.810 --> 00:22:36.240
to start to build a
chart or visualization.

462
00:22:36.240 --> 00:22:39.240
So I can go ahead and
click the visualization tab

463
00:22:39.240 --> 00:22:41.422
and open that up.

464
00:22:41.422 --> 00:22:44.350
So you notice it supports
quite a few chart types,

465
00:22:44.350 --> 00:22:47.910
anything from bar charts,
line charts, scatter plots.

466
00:22:47.910 --> 00:22:50.500
And we make it very easy
to create those.

467
00:22:50.500 --> 00:22:52.960
So, for example, on my
x-axis I want the cohort.

468
00:22:52.960 --> 00:22:55.600
I can just drop it in there.

469
00:22:55.600 --> 00:22:59.080
I want to chart the
customer count.

470
00:22:59.080 --> 00:23:02.087
I can simply put it
in as a value there.

471
00:23:02.087 --> 00:23:03.670
If I want to add an
additional series,

472
00:23:03.670 --> 00:23:05.110
such as the median
revenue, I can

473
00:23:05.110 --> 00:23:08.060
go ahead and add that as well.

474
00:23:08.060 --> 00:23:10.980
And then I can decide, this
doesn't look quite right

475
00:23:10.980 --> 00:23:15.380
as a bar chart, let's see
it as a grouped bar chart.

476
00:23:15.380 --> 00:23:17.100
I want the revenue
value to show more

477
00:23:17.100 --> 00:23:24.000
so we can actually move that
to a second axis, on the right.

478
00:23:24.000 --> 00:23:27.870
And I notice that
this is maybe not the best

479
00:23:27.870 --> 00:23:29.400
chart for seeing trends.

480
00:23:29.400 --> 00:23:32.300
Let's make this a line chart.

481
00:23:32.300 --> 00:23:35.650
And I can see some kind of
clear divergence with our data.

482
00:23:35.650 --> 00:23:39.590
I can notice that
around this time frame,

483
00:23:39.590 --> 00:23:41.500
after the July
quarter, there seems

484
00:23:41.500 --> 00:23:44.500
to be a divergence between
median revenue and customer

485
00:23:44.500 --> 00:23:45.340
count.

486
00:23:45.340 --> 00:23:49.150
And I knew that was because
we had started a new marketing

487
00:23:49.150 --> 00:23:49.680
campaign.

488
00:23:49.680 --> 00:23:51.430
So looks like we're
getting new customers,

489
00:23:51.430 --> 00:23:52.888
but they're actually
spending less.

490
00:23:52.888 --> 00:23:55.220
So maybe not the campaign
that we want to address.

491
00:23:55.220 --> 00:23:56.890
So easily getting
insight from Sigma.

492
00:23:59.460 --> 00:24:01.870
And, additionally
while you're charting,

493
00:24:01.870 --> 00:24:04.630
you're still in the
data prep environment.

494
00:24:04.630 --> 00:24:07.490
So, for example, if I want
to filter my data down,

495
00:24:07.490 --> 00:24:07.990
I can do that.

496
00:24:07.990 --> 00:24:10.650
We have a really robust
filtering console,

497
00:24:10.650 --> 00:24:13.750
where I can say I just want to
look at the last 12 months of data.

498
00:24:13.750 --> 00:24:16.740
I can actually do a
relative date filter

499
00:24:16.740 --> 00:24:22.530
on months where I can say, show
me the previous 12 months of data.

500
00:24:22.530 --> 00:24:24.530
That updates our data and
the accompanying chart

501
00:24:24.530 --> 00:24:27.670
as well, just like so.
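
A relative date filter like the "previous 12 months" one applied here can be sketched as follows (the boundary convention, whole calendar months up to today, is an assumption; real tools vary):

```python
from datetime import date

def in_previous_n_months(d: date, today: date, n: int = 12) -> bool:
    # Count whole calendar months between d and today;
    # keep rows from the current month back through n-1 months ago.
    months_back = (today.year - d.year) * 12 + (today.month - d.month)
    return 0 <= months_back < n

today = date(2018, 9, 15)
rows = [date(2018, 8, 1), date(2017, 8, 1), date(2018, 9, 30)]
print([in_previous_n_months(d, today) for d in rows])  # [True, False, True]
```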

502
00:24:27.670 --> 00:24:30.505
You can create as many
charts as you like.

503
00:24:30.505 --> 00:24:31.880
When you're done
creating charts,

504
00:24:31.880 --> 00:24:34.110
you can add them to a dashboard.

505
00:24:34.110 --> 00:24:36.350
Let me give you an example
of what a dashboard looks

506
00:24:36.350 --> 00:24:38.054
like in Sigma.

507
00:24:41.400 --> 00:24:43.030
This is an example
of a dashboard.

508
00:24:43.030 --> 00:24:45.440
You notice they all
load up really quickly.

509
00:24:45.440 --> 00:24:47.440
And since the dashboard
and the charts

510
00:24:47.440 --> 00:24:50.880
are sitting on top of live data
on Snowflake, and not extracts,

511
00:24:50.880 --> 00:24:54.745
they're always live and the
data is always up to date.

512
00:24:54.745 --> 00:24:57.120
So you can see I can easily
format a pivot table, an area

513
00:24:57.120 --> 00:24:58.660
chart.

514
00:24:58.660 --> 00:25:02.510
Kind of an example of a chart
that I was working with before.

515
00:25:02.510 --> 00:25:05.380
Additionally, you can add
filters to your charts

516
00:25:05.380 --> 00:25:07.160
and have them applied
to specific charts.

517
00:25:07.160 --> 00:25:08.660
So, for example, I
only want to look

518
00:25:08.660 --> 00:25:10.490
at data that comes
from users that

519
00:25:10.490 --> 00:25:13.430
have at least 7,000 revenue.

520
00:25:13.430 --> 00:25:17.460
And you can notice how
some charts kind of get

521
00:25:17.460 --> 00:25:18.752
updated like that.

522
00:25:22.360 --> 00:25:23.760
That's the demo of Sigma.

523
00:25:35.650 --> 00:25:36.470
Great.

524
00:25:36.470 --> 00:25:37.610
I think I'm up.

525
00:25:37.610 --> 00:25:39.350
My name is Dominic Go.

526
00:25:39.350 --> 00:25:40.353
And I can get started.

527
00:25:40.353 --> 00:25:41.520
We can go to the next slide.

528
00:25:44.950 --> 00:25:47.150
So like I said, my
name is Dominic Go.

529
00:25:47.150 --> 00:25:49.260
And I'm the Director
of Analytics

530
00:25:49.260 --> 00:25:51.370
at Schoola and Olivela.

531
00:25:51.370 --> 00:25:53.880
And I thought I would
just give a quick overview

532
00:25:53.880 --> 00:25:56.700
of what these companies
do and how I got involved.

533
00:25:56.700 --> 00:26:01.080
So Schoola was
actually the first brand

534
00:26:01.080 --> 00:26:03.150
and it was launched in 2013.

535
00:26:03.150 --> 00:26:08.040
And it's an e-commerce platform
that sells used clothing.

536
00:26:08.040 --> 00:26:13.200
I joined the company as the
first analytics hire in 2015.

537
00:26:13.200 --> 00:26:14.880
And I was asked to
basically set up

538
00:26:14.880 --> 00:26:17.250
the analytics infrastructure.

539
00:26:17.250 --> 00:26:19.800
Since then we've launched
a second brand, which

540
00:26:19.800 --> 00:26:22.860
deals with high
fashion goods, so it's

541
00:26:22.860 --> 00:26:25.590
a completely different
website, but sitting on

542
00:26:25.590 --> 00:26:29.010
the same existing
infrastructure and there's

543
00:26:29.010 --> 00:26:32.300
a lot of overlapping
business processes and stuff.

544
00:26:32.300 --> 00:26:39.420
And that launched in
the summer of 2017.

545
00:26:39.420 --> 00:26:40.920
So we can move on
to the next slide,

546
00:26:40.920 --> 00:26:43.230
but that's just a quick
overview of what we do.

547
00:26:46.770 --> 00:26:50.530
So I wanted to give a quick look
of where we were coming from.

548
00:26:50.530 --> 00:26:54.100
So essentially, when I joined
Schoola, as I mentioned,

549
00:26:54.100 --> 00:26:56.880
I was the first analytics hire.

550
00:26:56.880 --> 00:27:01.420
And so I had the inevitable
question of build or buy.

551
00:27:01.420 --> 00:27:04.390
And given our low
resources, I decided

552
00:27:04.390 --> 00:27:08.320
that build was going to be
the easier avenue for us

553
00:27:08.320 --> 00:27:09.430
at that time.

554
00:27:09.430 --> 00:27:12.610
So I essentially built
our entire data warehouse

555
00:27:12.610 --> 00:27:17.050
on a single EC2
analytics server.

556
00:27:17.050 --> 00:27:21.340
As you can see here, we had a
number of local data sources.

557
00:27:21.340 --> 00:27:25.000
So you see the Schoola
database and Olivela databases,

558
00:27:25.000 --> 00:27:27.640
as well as our
operations databases.

559
00:27:27.640 --> 00:27:29.710
And then we had a
number of Cloud software

560
00:27:29.710 --> 00:27:36.040
as a service, or SaaS, sources to
cover various other business

561
00:27:36.040 --> 00:27:40.690
needs, so things like Zendesk
for customer service; Silverpop,

562
00:27:40.690 --> 00:27:45.280
which was our ESP at the
time; QuickBooks, et cetera.

563
00:27:45.280 --> 00:27:47.290
And so we integrated
all this data

564
00:27:47.290 --> 00:27:49.870
and brought it into a
single source, as you want

565
00:27:49.870 --> 00:27:51.250
to do for a data warehouse.

566
00:27:51.250 --> 00:27:54.100
And then stored it all
in a single schema.

567
00:27:54.100 --> 00:27:58.120
And then, on top of that,
we built a web application

568
00:27:58.120 --> 00:28:01.660
to basically surface
the data to our users.

569
00:28:01.660 --> 00:28:06.818
So you can go to the next
slide to show just a quick look

570
00:28:06.818 --> 00:28:07.610
at what that looks like.

571
00:28:07.610 --> 00:28:11.700
So this was just a completely
homegrown application.

572
00:28:11.700 --> 00:28:14.000
As you can see, the
screenshot, to the left,

573
00:28:14.000 --> 00:28:15.590
there is kind of
like a repository

574
00:28:15.590 --> 00:28:18.260
of the existing reports and
things that we've built.

575
00:28:18.260 --> 00:28:20.360
And then an example
of one of the reports

576
00:28:20.360 --> 00:28:25.100
here from one of our
old revenue reports.

577
00:28:25.100 --> 00:28:27.110
So that's where we were.

578
00:28:27.110 --> 00:28:33.170
Now I want to just talk a little
bit about why this was painful.

579
00:28:33.170 --> 00:28:39.110
So we actually, as a
company, had a very good data

580
00:28:39.110 --> 00:28:41.060
driven culture.

581
00:28:41.060 --> 00:28:46.790
And what I mean by that is that
from our leadership team all

582
00:28:46.790 --> 00:28:49.340
the way down to the
warehouse associates,

583
00:28:49.340 --> 00:28:52.340
everyone was interested in
what the data could teach us

584
00:28:52.340 --> 00:28:58.070
about how to improve our
operations or our marketing

585
00:28:58.070 --> 00:28:59.700
initiatives, et cetera.

586
00:28:59.700 --> 00:29:04.130
But in the infrastructure
that I showed you,

587
00:29:04.130 --> 00:29:08.240
getting data to
the business users

588
00:29:08.240 --> 00:29:11.520
required significant
technical skills.

589
00:29:11.520 --> 00:29:16.400
So even if we were able to
get it to them in a raw CSV,

590
00:29:16.400 --> 00:29:21.500
they would need very strong
Excel skills to get around

591
00:29:21.500 --> 00:29:23.850
and do things with it.

592
00:29:23.850 --> 00:29:27.020
In addition to that, as
our data sources grew

593
00:29:27.020 --> 00:29:31.100
and as the company grew,
we had very large queries,

594
00:29:31.100 --> 00:29:35.270
very complex queries, that
stored a lot of business logic.

595
00:29:35.270 --> 00:29:39.950
And then, even after we
got those things developed,

596
00:29:39.950 --> 00:29:41.780
they were even
harder to maintain.

597
00:29:41.780 --> 00:29:44.750
And then on top of all those
things, all those business

598
00:29:44.750 --> 00:29:48.680
related activities, you had
things like server space

599
00:29:48.680 --> 00:29:53.120
and compute time and things
like that to worry about.

600
00:29:53.120 --> 00:29:55.520
So I love this analogy
of If You Give A Mouse A

601
00:29:55.520 --> 00:29:57.320
Cookie because
basically any time I

602
00:29:57.320 --> 00:29:59.330
would finish a report
for somebody else,

603
00:29:59.330 --> 00:30:01.310
there were about three
other questions that

604
00:30:01.310 --> 00:30:03.470
came out of that
report that required

605
00:30:03.470 --> 00:30:05.720
additional development.

606
00:30:05.720 --> 00:30:10.880
And so in the time leading up to
the launch of our second brand

607
00:30:10.880 --> 00:30:13.820
of Olivela, I saw a great
kind of turning point

608
00:30:13.820 --> 00:30:16.790
for the company
to make a change.

609
00:30:16.790 --> 00:30:20.480
So we can move on
to the next slide.

610
00:30:20.480 --> 00:30:21.930
And so it was at
that time that I

611
00:30:21.930 --> 00:30:24.840
decided we needed
something more sustainable,

612
00:30:24.840 --> 00:30:28.500
something more easy
to collaborate on,

613
00:30:28.500 --> 00:30:31.320
for both our business users
and for our growing data

614
00:30:31.320 --> 00:30:35.830
team to maintain and
grow our ETL jobs

615
00:30:35.830 --> 00:30:38.700
and set some more business
standards about how

616
00:30:38.700 --> 00:30:39.790
to do things.

617
00:30:39.790 --> 00:30:41.850
So there were a number
of things that I

618
00:30:41.850 --> 00:30:43.690
had to consider while
making this decision.

619
00:30:43.690 --> 00:30:47.610
So the first one, and the
biggest one, was cost.

620
00:30:47.610 --> 00:30:51.390
So there's tons of BI tools
out on the market, from ETL

621
00:30:51.390 --> 00:30:55.290
to user level tools
to choose from.

622
00:30:55.290 --> 00:30:56.910
I've had the
fortunate experience

623
00:30:56.910 --> 00:31:00.090
of working with a lot
of them firsthand.

624
00:31:00.090 --> 00:31:03.360
But we're a small company,
we're still a startup.

625
00:31:03.360 --> 00:31:05.940
I mean, even though
we started 2013,

626
00:31:05.940 --> 00:31:08.850
we're still very much
a startup, growing,

627
00:31:08.850 --> 00:31:12.630
and so we need to manage
costs very carefully.

628
00:31:12.630 --> 00:31:17.520
And so I wanted something that
could provide immediate value,

629
00:31:17.520 --> 00:31:20.320
but also be cost effective.

630
00:31:20.320 --> 00:31:22.260
Similar, and
related to that, is

631
00:31:22.260 --> 00:31:23.712
that I knew
that it needed

632
00:31:23.712 --> 00:31:25.170
to be able to grow
with the company

633
00:31:25.170 --> 00:31:28.410
because we have a very rapid
growth trajectory right now.

634
00:31:28.410 --> 00:31:30.638
And then, in addition
to that, I wanted

635
00:31:30.638 --> 00:31:33.180
to make sure it was something
that people would actually use.

636
00:31:33.180 --> 00:31:36.060
I've seen, far too
many times, people

637
00:31:36.060 --> 00:31:39.540
will be introduced to a BI
tool that they use for a week

638
00:31:39.540 --> 00:31:45.050
and then just forget about it and
switch back to downloading CSVs,

639
00:31:45.050 --> 00:31:47.130
so that they can
put it into Excel.

640
00:31:47.130 --> 00:31:49.320
And then finally I
wanted something,

641
00:31:49.320 --> 00:31:51.240
like I mentioned
earlier, that was easier

642
00:31:51.240 --> 00:31:54.960
to maintain after we put
all that complex business

643
00:31:54.960 --> 00:31:56.640
logic into place.

644
00:31:56.640 --> 00:32:01.240
So we can move on
to the next slide.

645
00:32:01.240 --> 00:32:03.660
So this is where we moved to.

646
00:32:03.660 --> 00:32:06.510
So you've got the same sort
of list of data sources

647
00:32:06.510 --> 00:32:10.290
and a number of others
working there on the left.

648
00:32:10.290 --> 00:32:13.110
And then for our
ETL tool we actually

649
00:32:13.110 --> 00:32:15.360
chose one of the
other Partner Connect

650
00:32:15.360 --> 00:32:19.500
tools, that wasn't listed
above, but it's called Matillion.

651
00:32:19.500 --> 00:32:24.750
And then we put all of that
into a Snowflake data warehouse.

652
00:32:24.750 --> 00:32:31.110
And then on top of that, we have
Sigma as our primary business

653
00:32:31.110 --> 00:32:32.670
user tool.

654
00:32:32.670 --> 00:32:34.420
So we can move to
our next slide.

655
00:32:37.600 --> 00:32:40.450
So, again, it addresses
all the main issues here

656
00:32:40.450 --> 00:32:41.650
that I was talking about.

657
00:32:41.650 --> 00:32:48.550
So Snowflake and Sigma are
both completely usage based,

658
00:32:48.550 --> 00:32:49.780
in both user and compute terms.

659
00:32:49.780 --> 00:32:52.270
So that meant as I
was rolling it out

660
00:32:52.270 --> 00:32:57.340
it was really just me and
one other user, an analyst,

661
00:32:57.340 --> 00:33:00.040
working on the platforms as
we set up the infrastructure.

662
00:33:00.040 --> 00:33:04.372
Which meant that our monthly
costs were very, very low

663
00:33:04.372 --> 00:33:06.080
as we were getting
things up and running,

664
00:33:06.080 --> 00:33:07.622
which was great
because that meant we

665
00:33:07.622 --> 00:33:09.070
had time to validate the data.

666
00:33:09.070 --> 00:33:14.220
And then we could really set
our business standards well.

667
00:33:14.220 --> 00:33:18.190
And that allowed us then to also
onboard people, one at a time

668
00:33:18.190 --> 00:33:21.820
or a small team at a time,
so that we could ensure

669
00:33:21.820 --> 00:33:27.160
that the people received the
training and the knowledge

670
00:33:27.160 --> 00:33:30.430
that they need to actually
make the tool useful for them.

671
00:33:30.430 --> 00:33:33.460
And then, because we had
all of that time leading up

672
00:33:33.460 --> 00:33:35.170
before we had to
scale and introduce it

673
00:33:35.170 --> 00:33:38.500
to the entire
organization, that meant

674
00:33:38.500 --> 00:33:41.620
we could set up standards to
make maintenance very simple

675
00:33:41.620 --> 00:33:43.030
for us.

676
00:33:43.030 --> 00:33:46.630
So we can move on
to the next one.

677
00:33:46.630 --> 00:33:54.480
So as I was
preparing for this, I

678
00:33:54.480 --> 00:33:58.140
was thinking about this specific
example of a win for us.

679
00:33:58.140 --> 00:34:00.540
And one really big
one was we have

680
00:34:00.540 --> 00:34:03.030
a dashboard that
we call the Cohort

681
00:34:03.030 --> 00:34:06.240
Dashboard at our company,
which allows us to look at--

682
00:34:06.240 --> 00:34:08.610
actually very similar
to the example

683
00:34:08.610 --> 00:34:12.600
they ran through earlier,
we cohort our customers

684
00:34:12.600 --> 00:34:16.770
by the month that they joined
and through the acquisition

685
00:34:16.770 --> 00:34:18.179
channel that they joined.

686
00:34:18.179 --> 00:34:24.420
And so before this
cohort dashboard

687
00:34:24.420 --> 00:34:27.690
was over 700 lines
of PHP code

688
00:34:27.690 --> 00:34:33.210
and another 500 lines of
manually maintained SQL.

689
00:34:33.210 --> 00:34:37.110
Now it's a Matillion job
with about five nodes.

690
00:34:37.110 --> 00:34:39.810
And then a Sigma
dashboard, which

691
00:34:39.810 --> 00:34:43.949
took a little bit, maybe
a week, to put together

692
00:34:43.949 --> 00:34:46.800
after collecting all the
business requirements,

693
00:34:46.800 --> 00:34:50.520
but then an update is as
simple as five minutes.

694
00:34:50.520 --> 00:34:52.080
So, yeah.

695
00:34:52.080 --> 00:34:56.070
So we can move on
to the next slide.

696
00:34:56.070 --> 00:35:02.190
So as I was thinking about
my experience with the tool,

697
00:35:02.190 --> 00:35:06.210
Sigma in particular has
grown and adapted a lot

698
00:35:06.210 --> 00:35:08.818
and they're constantly
making great improvements

699
00:35:08.818 --> 00:35:09.360
to the tools.

700
00:35:09.360 --> 00:35:12.090
But some things
that I thought I'd

701
00:35:12.090 --> 00:35:13.840
share that I've
learned along the way.

702
00:35:13.840 --> 00:35:17.640
So the biggest thing is
that technology is never

703
00:35:17.640 --> 00:35:20.610
going to-- well, maybe someday;
I can't speak for forever,

704
00:35:20.610 --> 00:35:23.070
but right now technology
is not solving

705
00:35:23.070 --> 00:35:27.720
for good organizational
and logical conventions.

706
00:35:27.720 --> 00:35:31.020
So those sorts of things
still very much matter.

707
00:35:31.020 --> 00:35:36.150
So for traditional things in
data warehousing, like really

708
00:35:36.150 --> 00:35:40.860
clearly defining dimensions
and facts and things like that,

709
00:35:40.860 --> 00:35:45.060
there's no longer the technical
need for those things to exist.

710
00:35:45.060 --> 00:35:46.890
As they mentioned
earlier, Snowflake

711
00:35:46.890 --> 00:35:49.950
is a no-index environment.

712
00:35:49.950 --> 00:35:54.150
But the logical organization
that those traditional data

713
00:35:54.150 --> 00:35:56.820
warehouse standards
provided really

714
00:35:56.820 --> 00:36:00.600
helped me create a good
environment where our business

715
00:36:00.600 --> 00:36:04.770
users could do
something directly

716
00:36:04.770 --> 00:36:07.380
without a ton of guidance.

717
00:36:07.380 --> 00:36:11.850
So I list out a few
things here as just

718
00:36:11.850 --> 00:36:15.630
like really precise
things that I've done.

719
00:36:15.630 --> 00:36:19.285
One in particular is using
explicit ID column names

720
00:36:19.285 --> 00:36:20.160
and things like that.

721
00:36:20.160 --> 00:36:22.470
Because that just helps
your business users,

722
00:36:22.470 --> 00:36:26.730
if they decide to build their
own worksheet and do a join,

723
00:36:26.730 --> 00:36:30.700
it will help them
to be successful.

724
00:36:30.700 --> 00:36:33.000
The second big
thing that I learned

725
00:36:33.000 --> 00:36:36.540
was that you need to train
the users with relevant data.

726
00:36:36.540 --> 00:36:40.080
So when I was first
starting with Sigma,

727
00:36:40.080 --> 00:36:41.850
I was very, very
excited about the tool.

728
00:36:41.850 --> 00:36:44.790
I wanted to get it out there
as quickly as possible.

729
00:36:44.790 --> 00:36:49.740
But, as I mentioned earlier,
with someone on our merchandising team,

730
00:36:49.740 --> 00:36:51.600
I showed her some
sales data, which

731
00:36:51.600 --> 00:36:55.300
is not very relevant
to what she is doing.

732
00:36:55.300 --> 00:36:59.950
She was much more interested
in our inventory levels.

733
00:36:59.950 --> 00:37:01.950
And so it just didn't stick.

734
00:37:01.950 --> 00:37:04.980
I hadn't cleaned up the data
warehouse columns or anything

735
00:37:04.980 --> 00:37:05.520
yet.

736
00:37:05.520 --> 00:37:08.850
And so it ended up
being a wasted effort

737
00:37:08.850 --> 00:37:11.620
until a few months
later when I had

738
00:37:11.620 --> 00:37:16.940
set those good
conventions and such so

739
00:37:16.940 --> 00:37:18.990
that it was more
relevant to her and could

740
00:37:18.990 --> 00:37:20.940
provide immediate value.

741
00:37:20.940 --> 00:37:25.500
And then the final piece
here is that, unlike other BI

742
00:37:25.500 --> 00:37:28.260
tools on the market right
now, there's not really

743
00:37:28.260 --> 00:37:32.760
a data modeling layer.

744
00:37:32.760 --> 00:37:36.240
Which is a really, really
important consideration

745
00:37:36.240 --> 00:37:38.970
as you decide to roll
this out because a lot

746
00:37:38.970 --> 00:37:43.710
of the kind of traditional
guardrails are not there.

747
00:37:43.710 --> 00:37:46.020
What it does allow and
why I think it's better

748
00:37:46.020 --> 00:37:51.270
is that it allows for
the business users

749
00:37:51.270 --> 00:37:57.120
to be much closer to the actual
data and how it's produced.

750
00:37:57.120 --> 00:37:58.770
And the reason
that's important is

751
00:37:58.770 --> 00:38:01.650
because they're the ones
often that are producing it

752
00:38:01.650 --> 00:38:03.600
or that know it best.

753
00:38:03.600 --> 00:38:05.640
And so I think
that this actually

754
00:38:05.640 --> 00:38:10.410
is a great innovation
by Sigma, but it also

755
00:38:10.410 --> 00:38:12.720
means that it does lack
those guardrails that

756
00:38:12.720 --> 00:38:17.030
used to be there with kind
of more traditional modeling

757
00:38:17.030 --> 00:38:17.530
pieces.

758
00:38:17.530 --> 00:38:20.790
So that's just one other thing
to consider as you're rolling

759
00:38:20.790 --> 00:38:23.040
this out to your organization.

760
00:38:23.040 --> 00:38:25.170
And I think that's
it for me, so.

761
00:38:29.090 --> 00:38:29.590
All right.

762
00:38:29.590 --> 00:38:33.420
So now is when we're going
to jump into some questions.

763
00:38:33.420 --> 00:38:36.720
So if you have any questions,
feel free to type it in

764
00:38:36.720 --> 00:38:41.910
and I will communicate those
questions to our panelists.

765
00:38:41.910 --> 00:38:46.850
We have one question
to get us started.

766
00:38:46.850 --> 00:38:48.860
So this is a
question about Sigma.

767
00:38:48.860 --> 00:38:51.890
And the question is, how does
the functionality in Sigma

768
00:38:51.890 --> 00:38:52.715
compare to Excel?

769
00:38:59.270 --> 00:39:00.060
Hi, Erica.

770
00:39:02.488 --> 00:39:04.030
So this question
was kind of like how

771
00:39:04.030 --> 00:39:08.380
does the functionality in
Sigma compare to Excel.

772
00:39:08.380 --> 00:39:11.210
So it's very similar.

773
00:39:11.210 --> 00:39:13.000
So we definitely
try to model what

774
00:39:13.000 --> 00:39:16.630
we do around that
spreadsheet interface.

775
00:39:16.630 --> 00:39:20.270
So you can find all the
typical arithmetic functions.

776
00:39:20.270 --> 00:39:23.360
For example, if you want to do
if statements, we support if

777
00:39:23.360 --> 00:39:27.640
statements, then you can nest
any of our other functions

778
00:39:27.640 --> 00:39:31.130
within an if statement to make
a more complex if statement just

779
00:39:31.130 --> 00:39:32.110
like that.
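
The nesting described, putting other functions inside an If, mirrors nested IF formulas in a spreadsheet. A rough illustration of the logic in Python (the thresholds and names are made up):

```python
def revenue_tier(revenue: float) -> str:
    # Logic of a nested spreadsheet-style formula such as:
    #   If(revenue > 1000, "high", If(revenue > 100, "medium", "low"))
    if revenue > 1000:
        return "high"
    elif revenue > 100:
        return "medium"
    return "low"

print([revenue_tier(r) for r in (5000, 250, 40)])  # ['high', 'medium', 'low']
```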

780
00:39:37.500 --> 00:39:40.620
And a question for Snowflake.

781
00:39:40.620 --> 00:39:44.400
Why do people tend to move from
their current data warehouse

782
00:39:44.400 --> 00:39:45.570
over to Snowflake?

783
00:39:45.570 --> 00:39:47.680
Like what's the main difference?

784
00:39:47.680 --> 00:39:50.500
Like what's the pain
point that brings them over?

785
00:39:50.500 --> 00:39:53.320
Well, I'd say that
the most common pain

786
00:39:53.320 --> 00:39:55.180
point that people
are running into

787
00:39:55.180 --> 00:39:57.400
is that they just
don't have flexibility.

788
00:39:57.400 --> 00:40:02.140
So let's say that they have a
couple hundred or maybe even

789
00:40:02.140 --> 00:40:07.210
just a couple dozen BI users
who are accessing the data

790
00:40:07.210 --> 00:40:11.200
warehouse through their BI
implementation through Sigma.

791
00:40:11.200 --> 00:40:15.070
Well, if they're using a
traditional data warehouse,

792
00:40:15.070 --> 00:40:17.170
even when it's in the
Cloud, oftentimes you'll

793
00:40:17.170 --> 00:40:19.997
find that they have a
lot of concurrency issues

794
00:40:19.997 --> 00:40:22.330
where you have too many people
trying to access the data

795
00:40:22.330 --> 00:40:24.100
warehouse at the same time.

796
00:40:24.100 --> 00:40:25.900
And you simply don't
have the throughput

797
00:40:25.900 --> 00:40:29.460
that you'd need to be able
to respond to that problem.

798
00:40:29.460 --> 00:40:31.450
Well, with Snowflake you
can scale up and down

799
00:40:31.450 --> 00:40:34.270
to meet that demand
and very easily satisfy

800
00:40:34.270 --> 00:40:37.510
all of those people who
are trying to access data.

801
00:40:37.510 --> 00:40:39.310
With traditional data
warehouses you really

802
00:40:39.310 --> 00:40:42.962
can't do that, even data
warehouses in the Cloud.

803
00:40:42.962 --> 00:40:44.920
Another thing that's
really nice with Snowflake

804
00:40:44.920 --> 00:40:46.253
is that it's really predictable.

805
00:40:46.253 --> 00:40:48.970
So you know exactly how much
you're going to be paying.

806
00:40:48.970 --> 00:40:51.280
You know exactly how long
a query is going to take.

807
00:40:51.280 --> 00:40:55.930
You know exactly what you can
do to be able to get that query

808
00:40:55.930 --> 00:40:58.420
to be satisfied quicker
by using more resources

809
00:40:58.420 --> 00:41:00.790
or using resources
in a different way.

810
00:41:00.790 --> 00:41:03.220
So it's predictable
and it's something

811
00:41:03.220 --> 00:41:07.080
that you can very easily
adapt to your situation.

812
00:41:11.860 --> 00:41:14.170
This next question
is about Sigma.

813
00:41:14.170 --> 00:41:17.020
And the question is, where does
the data processing in Sigma

814
00:41:17.020 --> 00:41:17.530
occur?

815
00:41:17.530 --> 00:41:21.160
Are you moving the data into
Sigma and then working on it?

816
00:41:21.160 --> 00:41:23.470
Or is it staying
in the warehouse?

817
00:41:39.190 --> 00:41:40.180
OK.

818
00:41:40.180 --> 00:41:42.810
It looks like we
may have lost Ali,

819
00:41:42.810 --> 00:41:47.520
so we can come back
to that question.

820
00:41:47.520 --> 00:41:49.850
The next question is for--

821
00:41:52.857 --> 00:41:53.940
the next question is for--

822
00:41:53.940 --> 00:41:54.440
Sorry.

823
00:41:54.440 --> 00:41:55.640
I can get to that question.

824
00:41:55.640 --> 00:41:56.765
I can get to that question.

825
00:41:56.765 --> 00:41:58.460
I'm sorry.

826
00:41:58.460 --> 00:42:02.150
Yeah, so basically
all of the data

827
00:42:02.150 --> 00:42:03.590
is processed within
the database,

828
00:42:03.590 --> 00:42:06.470
it's processed within Snowflake.

829
00:42:06.470 --> 00:42:10.340
So all that Sigma does is
it creates a SQL query,

830
00:42:10.340 --> 00:42:12.257
pushes that to Snowflake
and executes it

831
00:42:12.257 --> 00:42:13.090
within the database.

832
00:42:13.090 --> 00:42:15.382
So we're not actually
pulling a full data set out

833
00:42:15.382 --> 00:42:17.552
of Snowflake to be executed.
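
[Editor's note] The pushdown pattern Ali describes — the BI layer compiles user actions into a SQL query and executes it inside the warehouse, so only the result set travels — can be sketched in a few lines. This uses Python's built-in sqlite3 as a stand-in for the warehouse; the table, column, and function names are illustrative, not Sigma's actual API:

```python
import sqlite3

# Stand-in "warehouse"; in a real deployment this would be Snowflake.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 120.0), (2, "west", 75.5), (3, "east", 40.0)])

def build_query(table, filters=None, order_by=None):
    """Compile spreadsheet-style actions (filter, sort) into one SQL string."""
    sql = f"SELECT * FROM {table}"
    params = []
    if filters:
        clauses = []
        for column, value in filters.items():
            clauses.append(f"{column} = ?")
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql, params

# The full dataset never leaves the database; only the result set does.
sql, params = build_query("orders", filters={"region": "east"}, order_by="amount")
rows = conn.execute(sql, params).fetchall()
print(rows)  # → [(3, 'east', 40.0), (1, 'east', 120.0)]
```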

834
00:42:21.480 --> 00:42:21.980
All right.

835
00:42:21.980 --> 00:42:24.080
Thank you.

836
00:42:24.080 --> 00:42:29.230
The next question is for
Dominic about Olivela.

837
00:42:29.230 --> 00:42:32.500
The question was about
the data modeling

838
00:42:32.500 --> 00:42:37.840
and how Sigma doesn't
have the data model.

839
00:42:37.840 --> 00:42:41.910
How does that work in
your implementation?

840
00:42:41.910 --> 00:42:43.550
Yeah, great question.

841
00:42:43.550 --> 00:42:48.320
So in our
implementation, it really

842
00:42:48.320 --> 00:42:53.120
is dependent on the level of
expertise of the business user,

843
00:42:53.120 --> 00:42:57.320
if it was, say, an Excel power
user, as I like to think of it,

844
00:42:57.320 --> 00:43:02.480
then I would expose more of
the data access capabilities

845
00:43:02.480 --> 00:43:06.440
to the end users and
explain how the joins work.

846
00:43:06.440 --> 00:43:08.740
They do a really good
job of explaining,

847
00:43:08.740 --> 00:43:11.930
in the interface, how
joining tables work

848
00:43:11.930 --> 00:43:16.670
and what you can expect
from the resulting set.

849
00:43:16.670 --> 00:43:19.220
So, for me, it was
always a training choice

850
00:43:19.220 --> 00:43:22.550
to assess the level
of technical expertise

851
00:43:22.550 --> 00:43:25.040
and to know whether
or not I could expose

852
00:43:25.040 --> 00:43:28.260
that area of the tool to them.

853
00:43:28.260 --> 00:43:31.400
In a lot of cases, I
was able to do that.

854
00:43:31.400 --> 00:43:35.270
And then for others, who are
more like consumers or very

855
00:43:35.270 --> 00:43:40.010
light manipulators, people
who will filter, sort columns,

856
00:43:40.010 --> 00:43:43.460
those sorts of things, we would
tend to have the data team

857
00:43:43.460 --> 00:43:47.150
build pre-existing worksheets or
dashboards that they could then

858
00:43:47.150 --> 00:43:52.000
save a copy for themselves
to manipulate, et cetera.

859
00:43:52.000 --> 00:43:55.190
And so it's always a very
kind of calculated training

860
00:43:55.190 --> 00:43:59.360
choice, depending on
the team and everything,

861
00:43:59.360 --> 00:44:01.190
knowing that those
guardrails aren't there.

862
00:44:01.190 --> 00:44:03.110
But like I mentioned
earlier, I think

863
00:44:03.110 --> 00:44:08.780
that it's very valuable to have
that closeness of the business

864
00:44:08.780 --> 00:44:12.470
user to the actual data that
is output by the systems

865
00:44:12.470 --> 00:44:14.780
that they're using
because they'll understand

866
00:44:14.780 --> 00:44:16.490
better what they're doing.

867
00:44:16.490 --> 00:44:18.650
When there are data
quality issues,

868
00:44:18.650 --> 00:44:22.520
it's not some data analysts
trying to put in some business

869
00:44:22.520 --> 00:44:23.720
logic to solve for that.

870
00:44:23.720 --> 00:44:25.850
Instead, it gets
exposed to the end user

871
00:44:25.850 --> 00:44:29.510
and they can see what
happened with the data quality

872
00:44:29.510 --> 00:44:33.080
and why they need to
improve their processes

873
00:44:33.080 --> 00:44:34.650
or whatever the case may be.

874
00:44:34.650 --> 00:44:37.910
So, in my experience,
it's been a win.

875
00:44:37.910 --> 00:44:44.060
But it does mean that it
shifts a lot of priority

876
00:44:44.060 --> 00:44:45.065
on that training aspect.

877
00:44:49.680 --> 00:44:50.180
All right.

878
00:44:50.180 --> 00:44:50.680
Thank you.

879
00:44:50.680 --> 00:44:54.570
This dovetails well
into the next question,

880
00:44:54.570 --> 00:44:56.960
which is for Sigma, so for Ali.

881
00:44:56.960 --> 00:45:01.070
And the question was,
do all Sigma users

882
00:45:01.070 --> 00:45:03.670
have access to the
entire database

883
00:45:03.670 --> 00:45:08.220
or are there levels of
access that can be set up?

884
00:45:08.220 --> 00:45:11.530
Yes, so there are definitely
levels of access

885
00:45:11.530 --> 00:45:14.280
that can be set up.

886
00:45:14.280 --> 00:45:16.108
So the user is part
of a specific team,

887
00:45:16.108 --> 00:45:17.900
like I showed at the
beginning of the demo,

888
00:45:17.900 --> 00:45:19.950
as part of the product
solutions team.

889
00:45:19.950 --> 00:45:25.500
You can determine, for each database
or data connection, whether you want

890
00:45:25.500 --> 00:45:27.923
them to have authoring
access, meaning they

891
00:45:27.923 --> 00:45:29.340
can connect directly
to the table.

892
00:45:29.340 --> 00:45:31.255
Or if you want them to just
be a reader, meaning

893
00:45:31.255 --> 00:45:32.880
they can only work
with worksheets that

894
00:45:32.880 --> 00:45:34.880
have been curated for them.

895
00:45:34.880 --> 00:45:40.280
And so it is possible that
for one data connection

896
00:45:40.280 --> 00:45:44.190
they can have author access to
user tables and for another one

897
00:45:44.190 --> 00:45:46.555
they can only have reader
access where they only

898
00:45:46.555 --> 00:45:47.680
work with curated versions.
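
[Editor's note] One rough way to picture the access model Ali describes — each team granted either author access (query tables directly) or reader access (curated worksheets only), per data connection — is as a simple permission map. The names here are hypothetical, purely for illustration, not Sigma's actual configuration format:

```python
# Hypothetical access map: team -> data connection -> role.
ACCESS = {
    "product_solutions": {
        "snowflake_prod": "author",  # can connect directly to tables
        "finance_mart": "reader",    # curated worksheets only
    },
}

def can_query_tables(team, connection):
    """Authors may work against raw tables; readers may not."""
    return ACCESS.get(team, {}).get(connection) == "author"

print(can_query_tables("product_solutions", "snowflake_prod"))  # → True
print(can_query_tables("product_solutions", "finance_mart"))    # → False
```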

899
00:45:54.310 --> 00:45:54.810
All right.

900
00:45:54.810 --> 00:45:58.460
And it looks like that
is all of the questions

901
00:45:58.460 --> 00:46:01.350
that we've gotten
from our attendees.

902
00:46:01.350 --> 00:46:06.320
So thank you so much to our
panelists for walking us

903
00:46:06.320 --> 00:46:09.390
through everything
with Snowflake,

904
00:46:09.390 --> 00:46:13.310
Sigma, and how Olivela
has implemented them.

905
00:46:13.310 --> 00:46:15.920
And that's the end
of our webinar.

906
00:46:15.920 --> 00:46:19.490
So have a great day everyone.

907
00:46:19.490 --> 00:46:21.120
Thank you.
