resources.observepoint.com
3.98.63.202  Public Scan

URL: https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/journey-maintenance-test-monitor-critical-user-pa...
Submission: On January 20 via api from SG — Scanned from CA

Form analysis: 1 form found in the DOM

Name: wf-form-OP-Generic-Form
POST https://ops.observepoint.com/op-webform-submit-gdpr.php

<form id="op-form" name="wf-form-OP-Generic-Form" data-name="OP Generic Form" method="post" action="https://ops.observepoint.com/op-webform-submit-gdpr.php" offer="offer" _lpchecked="1" data-strala-form="">
  <div class="form-first-name-div">
    <label for="field" class="form-headings-2">First Name</label>
    <input type="text" class="form-text-field-2 w-input" autofocus="true" maxlength="256" name="field-001" data-name="field-001" id="field-001" required="">
  </div>
  <div class="form-last-name-div">
    <label for="field" class="form-headings-2">Last Name</label>
    <input type="text" maxlength="256" name="field-002" data-name="field-002" required="" id="field-002" class="form-text-field-2 w-input">
  </div>
  <div class="form-email-div">
    <label for="field" class="form-headings-2">Email</label>
    <input type="email" maxlength="256" name="field-003" data-name="field-003" required="" id="field-003" class="form-text-field-2 w-input"
      pattern="[a-z0-9._%+-]+@(?!(?:live|gmx|mailtopi|icloud|timevod|devinetrinitypch|test|testsdfsdf|mailtopi|banglemail|yahoo|mailrez|yopmail|outlook|msn|icloud|aol|zoho|yandex|lycox|inbox|myway|aim|goowy|juno|(?:hot|[gy]|short|at|proton|hush|lycos|fast)?mail)\.\w+$)[a-z0-9.-]+\.[a-z]{2,4}"
      title="Please provide a valid business email.">
  </div>
  <div class="form-company-div">
    <label for="field" class="form-headings-2">Company</label>
    <input type="text" maxlength="256" name="field-004" data-name="field-004" required="" id="field-004" class="form-text-field-2 w-input">
  </div>
  <div class="form-title-div">
    <label for="field-8" class="form-headings-2">Job Title</label>
    <input type="text" maxlength="256" name="field-006" data-name="field-006" required="" id="field-006" class="form-text-field-2 w-input">
  </div>
  <div class="form-country-div">
    <label for="field-12" class="form-headings-2">Country</label>
    <select id="country" name="field-010" required="" data-name="field-010" class="w-select" onchange="showDiv()">
      <option value="United States">United States of America</option>
      <option value="United Kingdom">United Kingdom</option>
      <option value="Australia">Australia</option>
      <option value="Canada">Canada</option>
      <option value="Belgium">Belgium</option>
      <option value="Brazil">Brazil</option>
      <option value="Bulgaria">Bulgaria</option>
      <option value="Croatia">Croatia</option>
      <option value="Cyprus">Cyprus</option>
      <option value="Czech Republic">Czech Republic</option>
      <option value="Denmark">Denmark</option>
      <option value="Estonia">Estonia</option>
      <option value="Finland">Finland</option>
      <option value="France">France</option>
      <option value="Germany">Germany</option>
      <option value="Greece">Greece</option>
      <option value="Hungary">Hungary</option>
      <option value="Ireland">Ireland</option>
      <option value="Italy">Italy</option>
      <option value="India">India</option>
      <option value="Japan">Japan</option>
      <option value="Latvia">Latvia</option>
      <option value="Lithuania">Lithuania</option>
      <option value="Luxenbourg">Luxenbourg</option>
      <option value="Netherlands">Netherlands</option>
      <option value="Malta">Malta</option>
      <option value="Poland">Poland</option>
      <option value="Portugal">Portugal</option>
      <option value="Romania">Romania</option>
      <option value="Slovakia">Slovakia</option>
      <option value="Slovenia">Slovenia</option>
      <option value="Spain">Spain</option>
      <option value="Sweden">Sweden</option>
      <option value="Russia">Russia</option>
      <option value="China">China</option>
      <option value="Zambia">Zambia</option>
      <option value="Zimbabwe">Zimbabwe</option>
      <option value="United Arab Emirates">United Arab Emirates</option>
      <option value="Asia/Pacific Region">Asia/Pacific Region</option>
      <option value="Other">Other</option>
    </select>
  </div>
  <div id="gdpr" style="display:none">
    <label id="field-11" class="w-checkbox"><input type="checkbox" id="field-011" name="field-011" data-name="field-011" class="w-checkbox-input">
      <span for="field-14" class="checkbox-label-2 w-form-label">I would like to receive marketing communications from ObservePoint, and I consent to the processing of the personal data that I provide ObservePoint in accordance with and as described
        in the &nbsp;<a href="https://www.observepoint.com/privacy-policy" target="_blank">privacy policy.</a></span>
    </label>
  </div>
  <input type="submit" value="Access Content" data-wait="Please wait..." class="yellow-btn form-button w-button">
  <div class="w-embed">
    <input type="hidden" id="offer" name="offer" value="Content - Journey Maintenance: Test &amp;amp; Monitor Critical User Paths for Functionality - Lucas Pafume Silva, Vivo">
    <input type="hidden" id="product-of-interest" name="productOfInterest" value="WEB">
    <input type="hidden" id="content-stage" name="contentStage" value="Awareness">
    <input id="redirect-url" name="redirectURL"
      value="https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/journey-maintenance-test-monitor-critical-user-paths-for-functionality?utm_campaign=ValidateJourneyMaintenance&amp;utm_medium=email&amp;utm_source=marketo&amp;mkt_tok=NDQyLU1EUi0zNTkAAAGCFSQMCNXgB-K0XK2QYHUDIH_NpJEmRj0KutUt8vXzoi9rOExWn_c6UjK2_6iMUPvC9T5WDebs_pBjjvFnrwooQEPVtVVJXHAXcJwEfgI?confirmation=true"
      type="hidden">
    <!-- Populated by GUP function -->
    <input type="hidden" id="utm-source" name="utm_source" value="marketo">
    <input type="hidden" id="utm-medium" name="utm_medium" value="email">
    <input type="hidden" id="utm-campaign" name="utm_campaign" value="ValidateJourneyMaintenance">
    <input type="hidden" id="utm-term" name="utm_term" value="">
    <input type="hidden" id="utm-content" name="utm_content" value="">
    <!-- populated by tracking script -->
    <input type="hidden" id="offer-url" name="offerURL"
      value="https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/journey-maintenance-test-monitor-critical-user-paths-for-functionality?utm_campaign=ValidateJourneyMaintenance&amp;utm_medium=email&amp;utm_source=marketo&amp;mkt_tok=NDQyLU1EUi0zNTkAAAGCFSQMCNXgB-K0XK2QYHUDIH_NpJEmRj0KutUt8vXzoi9rOExWn_c6UjK2_6iMUPvC9T5WDebs_pBjjvFnrwooQEPVtVVJXHAXcJwEfgI">
    <input type="hidden" id="referrer-url" name="referralURL" value="">
    <input type="hidden" id="lead-source" name="leadSource" value="Email - ValidateJourneyMaintenance">
    <!-- values populated below -->
    <input type="hidden" id="recommended-title-1" name="recommended_title_1" value="What's New in Digital Governance?">
    <input type="hidden" id="recommended-title-2" name="recommended_title_2" value="5 Trends Shaping Digital Experience - Tanu Javeri of IBM, Daryl Acumen of Adobe, Adam Greco of Amplitude">
    <input type="hidden" id="recommended-title-3" name="recommended_title_3" value="Technology Governance: Identify &amp; Validate MarTech for Accurate Data &amp; Actionable Insights - Arthur Engelhard, Newfold Digital">
    <input type="hidden" id="recommended-url-1" name="recommended_url_1" value="https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/whats-new-in-digital-governance">
    <input type="hidden" id="recommended-url-2" name="recommended_url_2" value="https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/5-trends-shaping-digital-experience">
    <input type="hidden" id="recommended-url-3" name="recommended_url_3"
      value="https://resources.observepoint.com/validate-2021-a-marketing-analytics-conference/technology-governance-identify-and-validate-martech-for-accurate-data-and-actionable-insights">
    <!--strala tracking-->
    <input type="hidden" name="strala_uuid" value="cf21a5fd-8da1-4635-b06c-6432ae4ac5ca">
  </div>
  <input type="hidden" name="strala_uuid">
</form>
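For readers curious how strict that business-email filter is, the check below reproduces the `pattern` attribute from the email field (field-003) in JavaScript. Browsers anchor `pattern` implicitly, so this sketch wraps it in `^(?:...)$` to match browser behavior.

```javascript
// The `pattern` attribute value from input field-003, copied verbatim
// (backslashes doubled for the JS string literal).
const pattern =
  "[a-z0-9._%+-]+@(?!(?:live|gmx|mailtopi|icloud|timevod|devinetrinitypch|test|testsdfsdf|mailtopi|banglemail|yahoo|mailrez|yopmail|outlook|msn|icloud|aol|zoho|yandex|lycox|inbox|myway|aim|goowy|juno|(?:hot|[gy]|short|at|proton|hush|lycos|fast)?mail)\\.\\w+$)[a-z0-9.-]+\\.[a-z]{2,4}";

// Browsers apply `pattern` to the whole value, so wrap it the same way.
const BUSINESS_EMAIL = new RegExp(`^(?:${pattern})$`);

function isBusinessEmail(address) {
  return BUSINESS_EMAIL.test(address);
}
```

Note that the negative lookahead only inspects the label immediately after the `@` followed by a single dotted suffix, so multi-level free-mail domains such as `yahoo.co.uk` slip through the filter.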

Text Content

JOURNEY MAINTENANCE: TEST & MONITOR CRITICAL USER PATHS FOR FUNCTIONALITY -
LUCAS PAFUME SILVA, VIVO

Your users traverse critical paths within your site or app on a regular basis.
Are you sure those paths are functioning properly and that your MarTech
solutions are tracking correctly? Solving these issues as soon as they happen
will help you maintain a good customer experience and ensure that you’re able to
track it. Learn how you can:

 * Replicate your site’s user flows, such as shopping carts or user logins, from
   start to finish.
 * Test if anything prevents the path from completing or if the analytics are
   not tracking.
 * Configure items such as browser, location, and consent preferences to detect
   any issues a user may experience.

 

Lucas Pafume Silva

Digital Transformation Consultant, Vivo

Lucas graduated in Systems Analysis and Development at Mackenzie, with an
emphasis in digital marketing. Lucas has worked in the web analytics area for
over 5 years, analyzing user behavior in their online journey via desktop, app,
and physical stores, in order to generate insights into experiences and improve
purchase flows. Lucas specializes in implementing analytics, optimizing
environments, and extracting insights from digital analytics tools, audience
management, SEO, CRO and CRM. He also has expertise in Data Governance, Data
Validation, Automation Testing, Data Quality Assurance, Validation Plan, Tag
Implementation Plan, Data Visualization, Marketing Cloud, Digital Analytics,
Data Management Platform (DMP), Media Attribution, A/B tests, and APP Journey.

 

Dylan Sellers

Director of Customer Success, ObservePoint

As Director of Customer Success, Dylan oversees a team of technical success
managers focused on understanding, supporting, and solving the data quality and
privacy problems digital analysts and marketers face. Previously, as a Customer
Solutions Engineer for ObservePoint, Dylan built custom solutions to meet
clients’ data collection and reporting needs while also acting as an internal
technical implementation specialist. Before his current role, he was head of UX
Research and Customer Education and the OP Community Slack Admin. His background
is in Product & Project Management with a Bachelor’s in Electrical Engineering.

 

--------------------------------------------------------------------------------

Dylan Sellers: (00:02)
Hello, everyone. Welcome to another session of VALIDATE. Hope the previous
sessions have been awesome. I am excited to be presenting today with my friend
Lucas Pafume. I will dive into our content today here, just in a second. But
it's an honor to present with Lucas. Kind of a fun fact: Lucas is based out of
Brazil, and we've actually had more conversations in Portuguese, so today will
be pretty unique for us, speaking and presenting in English, but I'm excited
nonetheless. And we have a lot of cool content to share around web journeys, as
nonetheless. And we have a lot of cool content to share around web journeys, as
we'll talk about today. So I'm going to go ahead and start sharing my screen and
we'll get this kicked off.

Dylan Sellers: (00:50)
So for those of you who are joining now, again, welcome to VALIDATE. We're going
to dive in. And my name is Dylan Sellers. I'm the Director of Customer Success
here at ObservePoint, and I'm joined by Lucas Pafume Silva, who is a Digital
Transformation Consultant at Vivo. Vivo is one of the largest telecommunications
companies in Brazil. In fact, it is the largest. They provide internet and
cellular services for a good portion of the population down there in Brazil. Today
we're going to be discussing Journey Maintenance, and specifically, we're going
to talk about how to Test & Monitor Critical User Paths for Functionality.
Before we dive in, I just want to say Lucas has been a long-time user; he's got
some cool content he's going to share. He's going to cover some use cases
they're running at Vivo with ObservePoint. I'm going to focus on some best
practices and some principles of what journeys to cover.

Dylan Sellers: (01:45)
And then I want to point out though as well, that Lucas is a long time
ObservePoint user. So, previous to working at Vivo, he worked at a company
called Via Varejo, and he actually used ObservePoint so much, loved it, came to
Vivo and said, "Hey, this is one of the first technologies we need to bring on
board." So, a long-time user and champion of ObservePoint, and a good friend. So,
excited to present with Lucas today.

(02:10)
I'm going to go ahead and dive into our topic today. So when we talk about web
journeys and the solution of journeys, you first have to talk about the problem.
So we'll talk about the why, the problem that we're solving with web journeys. The
problem that we solve is that conversion event testing is painfully manual, yet
critical to creating confidence in the reporting across your business. So maybe
you've experienced this, or your boss comes and you say, "Hey, we just launched
this new product last week. We need to understand how well it's performing." And
you go back to your reporting. You say, "Oh, actually I haven't been looking at
this very closely. Just realized there seems to be some kind of discrepancy, I
need to go and troubleshoot." And then at the end of that troubleshooting road,
you find out and discover that, sure enough, there was some kind of analytics
issue with the deployment of that product. And you have to go back to your boss
and have that kind of awkward conversation about how we weren't capturing the
traffic for those particular conversion events. And we're going to have to
basically refresh and start from today.

Dylan Sellers: (03:10)
So in order to avoid that kind of uncomfortable conversation with your boss, in
order to give you real visibility and confidence in your analytics, that's where
we start to apply web journeys. So let's talk about what a web journey is,
first. A web journey is basically a solution that allows us to simulate a user
path on a website, on a digital property. And so when you look at this image on
the right side, you see these users reaching certain milestones that drive to
specific targets, specific revenue based goals for the business. And so these
users are on this path on their way to conversion on their way to driving
revenue for you. And oftentimes, while we're simulating this experience, we're
going to be testing the analytics, we're going to see what's
firing, we're gonna see what's not firing, we're going to see the variables and
values, we're going to see cookies, we're going to see a lot of data. And we're
going to create rules to validate and ensure that that data is collected
properly. But where should we start? And specifically, what should we be testing
is the first question we'll discuss today.
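As a rough illustration of the idea (a sketch, not ObservePoint's actual journey format), a web journey can be thought of as an ordered list of user actions plus rules about which tags and variables must appear at each step:

```javascript
// Illustrative journey: actions simulate the user path; rules assert which
// tags must fire at each step with which variable values. All names, URLs,
// and selectors here are invented for the example.
const checkoutJourney = {
  actions: [
    { step: 1, do: "navigate", to: "https://example.com/product/123" },
    { step: 2, do: "click", selector: "#add-to-cart" },
    { step: 3, do: "navigate", to: "https://example.com/cart" },
  ],
  rules: [
    { step: 2, tag: "Google Analytics", requires: { event: "add_to_cart" } },
    { step: 3, tag: "Google Analytics", requires: { event: "view_cart" } },
  ],
};

// Validate one run: `captured` maps a step number to the tag requests
// observed at that step; return the rules that were not satisfied.
function validateRun(journey, captured) {
  const failures = [];
  for (const rule of journey.rules) {
    const hits = (captured[rule.step] || []).filter((t) => t.tag === rule.tag);
    const ok = hits.some((t) =>
      Object.entries(rule.requires).every(([key, val]) => t.vars[key] === val)
    );
    if (!ok) failures.push(rule);
  }
  return failures;
}
```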

(04:12)
So, what should we test? Usually when this question is asked of the customer
success team, we return the question: what matters most to you?
What are your priorities? What matters most to the company or to the
organization? You can ask this question a lot of ways. You can ask what metrics
have the most visibility or impact at your company? What data collection issues
have occurred recently? What's the most recent story that really led you to
start to think twice about your data governance strategy, to start thinking
twice about how you're preventing these issues from occurring? What was that
painful experience that drove you here in the first place to find this
solution? And then last but not least, if all else fails and you're still
uninspired on what to test, then you should ask what conversion events drive
revenue for your business? Follow the money and you'll find that those
conversion events are typically top priority and the highest focus. So, that's
what we should be testing. And we're going to be simulating, we're going to
create web journeys that simulate users executing these experiences and going
down this path.

(05:21)
Let's talk about where we should be testing. There are two primary strategies:
lower environment testing, and production environment testing. And we'll talk
about both, since there are pros and cons to each. One of the pros of lower
environment testing is that it's proactive. You can solve the problem before it
happens, which is pretty amazing, pretty rare that we can have that kind of
preventative protection with data governance. And then we
also have fewer obstacles. Typically in a production environment, you might
require test credentials. You can't go and buy products on your website live
without some kind of special test credentials in order to complete that
purchase. Same for bot detection. Typically we don't run into reCAPTCHA barriers
in lower environments. So it's very, very convenient that when we do lower
environment testing, we can kind of play with it and not worry about the
consequences so much.

(06:15)
The trade-off is that it's less stable, and that you're more prone to have
server errors. Maybe that test environment isn't always live and available.
Maybe you can't even get in, maybe we need a VPN or some kind of credentials and
it's a pain, and it's just more hoops for you to jump through. So consider that.
That's one of the barriers with lower environment testing, but note that if you
can overcome these barriers, it's a great strategic solution. I'll also mention
that it's not always in sync with production. Most engineering teams try to keep
the environment as close to parity with production as possible, but they're not
always going to be identical environments. They might have different backend
systems, different servers that are supporting them. A lot of variables come
into play here. So just keep that in mind that it's not fool-proof.

(06:57)
Now with production environment testing, this reflects reality, this is live,
this is real customer data. And so it's the real experience your customers are
having. So if you're having server errors or issues with your testing, it could
be that you're truly identifying stability issues on your website. So we
typically have more stability in production, and Lucas is going to show some of
this in his part of the presentation, he's going to talk about how we can
identify maybe some stability issues, or issues where our backend wasn't
providing the data we needed. And so, just keep that in mind that typically it's
more stable, but we can still detect those issues if they do exist. And then
last but not least, it's easy to access. On most production websites you can
access most of the content by simply visiting the page; it's publicly
accessible. You may need to log in to access some content, but the vast majority of it is
usually publicly accessible.

(07:48)
The trade-off here is that it's reactive. And inevitably your reporting is going to be
impacted. The question is, can we reduce that response time, that resolution
time, solve the problems so that it doesn't impact and further hurt our
reporting, and our analytics quality? It is technically reactive, but we can do
a lot of things to reduce that. We have obstacles like test credentials that are
required in order to purchase something, maybe in order to submit a form. We
also have bot detection hurdles. So maybe we need to whitelist ObservePoint's
IP addresses when we go and crawl your site and simulate this user experience.
Or maybe there's a reCAPTCHA that we have to find a workaround in order to
complete a form and test that conversion event, and simulate it. So that's where
we should be testing: the two different perspectives of lower environment
testing and production.

(08:42)
Next, I want to ask, when should we be testing? We should be asking this too,
when do you need to know about issues, how quickly? And most of you, if I asked
this, you're going to say, "Well, Dylan, I needed to know about this yesterday.
I needed to know last week; today is too late." And that's where we've
really pushed you towards lower environment testing. That's the best solution.
Now for those that are in production, you're still gonna want to have a really
quick response, and a rapid response time and resolution rate. ObservePoint has
web journey frequencies that you can configure, and you can configure those as
frequently as every 15 minutes, out to once a month. And it's
up to you and I really recommend that you decide and make the call of which
frequency you should be testing on, based on prioritization. If it's maybe your
checkout flow, the bread and butter of your business, what's driving all of your
revenue, like an e-commerce platform where you have to purchase something, then
that might be high priority, something you need always up and running all day.
But if it's something of a lower priority, maybe
it's a form, a newsletter, signup fields, hourly might be sufficient, or daily
might be sufficient.
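The prioritization Dylan describes can be summarized as a simple lookup. The 15-minute-to-monthly range comes from the talk; the tiers below are illustrative examples, not ObservePoint defaults:

```javascript
// Illustrative mapping of business priority to journey frequency.
// The tier names and assignments here are assumptions for the example.
const FREQUENCY_BY_PRIORITY = {
  "revenue-critical": "every 15 minutes", // e.g. e-commerce checkout
  high: "hourly",                         // e.g. lead-capture forms
  medium: "daily",                        // e.g. newsletter signup
  low: "monthly",                         // e.g. rarely-changed pages
};

function journeyFrequency(priority) {
  // Default to daily when a journey has no explicit priority.
  return FREQUENCY_BY_PRIORITY[priority] || "daily";
}
```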

(09:53)
Next, the other school of thought besides frequency, or the other strategy I
should say, is trigger-based testing. And so there's kind of two ways we can do
trigger-based testing. There's integrating with a CI/CD pipeline, making sure
that we integrate with the development cycle that your engineers and IT teams
use to deploy code. And then there's TMS-based
triggering, so when I make a publication of a new version of my tag management
system, I might want to trigger a series of web journeys based on that. As you
can see in this little diagram, we have the development cycle spelled out here.
Specifically, after we've deployed code to a test environment and are running
automated tests, that's a great place to do lower environment testing. And then
after we deploy to production, we measure and validate. That's the second key
point in this development life cycle where you'd want to test and run
ObservePoint Journeys. So that's the best practice here, and we recommend taking
one of these
two approaches.
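A trigger-based setup might look like the sketch below: a post-deploy CI step (or a TMS publish hook) that kicks off the relevant journeys instead of waiting for the schedule. The endpoint path and token handling are hypothetical placeholders, not ObservePoint's real API; the point is the shape of the integration.

```javascript
// Sketch of a trigger-based kickoff from a CI/CD pipeline step.
// NOTE: the endpoint path ("/journeys/run") and the payload fields are
// hypothetical placeholders for illustration, not ObservePoint's real API.
function buildTriggerRequest({ apiBase, token, journeyIds, reason }) {
  return {
    url: `${apiBase}/journeys/run`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ journeyIds, reason }),
  };
}
```

A deploy stage would build this request and hand it to `fetch` (or `curl`) right after the code ships; a TMS publish hook could do the same on container publication.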

(10:56)
Now last but not least, I want to talk about just a couple of best practices,
and then we'll turn time over to Lucas. But I want to make sure we hit the
target, we hit the bullseye, and we follow these best practices. The first one
is prioritizing rules. Not all conversion events are created equal, and we need
to remember that some conversion events are more critical to the business than
others. And so the rules that we create to validate these conversion events
should also be prioritized, and we should be focusing on what matters most.

(11:22)
Another pro tip is to start with Action Sets. Action Sets are groups of actions
that we create, and they can be used as components or modules, reused
repeatedly across a series of journeys. And so, as you think about and start
planning out the different test cases you're going to use, maybe you have a
series of actions that log a user into the platform, or maybe they complete a
purchase, or maybe they fill out payment information, and you want to reuse
those components multiple times to be efficient. That is a best practice and
will save you time. So start with action sets and plan to use them when you're
mapping out your test cases.
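The Action Set idea can be sketched like this: define a reusable group of actions once (a login flow, say) and splice it into multiple journeys. Selectors and URLs are invented for illustration:

```javascript
// A reusable action set: the login flow, defined once.
// All selectors, URLs, and values are invented for the example.
const loginActions = [
  { do: "navigate", to: "https://example.com/login" },
  { do: "input", selector: "#email", value: "test@example.com" },
  { do: "input", selector: "#password", value: "secret" },
  { do: "click", selector: "#submit" },
];

// Compose a journey from action sets and one-off actions in order.
function journey(name, ...actionGroups) {
  return { name, actions: actionGroups.flat() };
}

// The checkout journey reuses the login set, then adds its own steps.
const checkout = journey("checkout", loginActions, [
  { do: "navigate", to: "https://example.com/cart" },
  { do: "click", selector: "#pay" },
]);
```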

(11:54)
Next is integrate: make sure you're leveraging Microsoft Teams, Slack, email,
Jira, SMS, or one of these other communication channels. And then more
importantly, make sure that you are integrating ObservePoint into your process,
meaning that you are taking these messages, these notifications, and you have a
process to react, respond, and resolve the issues. Too many people overlook
that.

(12:18)
Leverage comparisons. Web journey comparisons specifically allow you to see the
difference between tags and their respective variables and values between runs.
And so maybe I run a test this hour, and I run the next hour, and suddenly
there's a failure. I can then see a
side-by-side comparison of what changed, which is pretty powerful.
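Conceptually, a run-over-run comparison is a diff of the captured tag variables. A minimal sketch, assuming each run is reduced to a flat map of variable names to values:

```javascript
// Diff the tag variables captured in two runs: report which variables
// were added, removed, or modified between the previous and current run.
function diffRuns(previous, current) {
  const keys = new Set([...Object.keys(previous), ...Object.keys(current)]);
  const changes = [];
  for (const key of keys) {
    if (!(key in previous)) {
      changes.push({ key, change: "added", now: current[key] });
    } else if (!(key in current)) {
      changes.push({ key, change: "removed", was: previous[key] });
    } else if (previous[key] !== current[key]) {
      changes.push({ key, change: "modified", was: previous[key], now: current[key] });
    }
  }
  return changes;
}
```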

(12:37)
Complete our web journey tutorials; make sure not to skip training. Most of you
like to dive in and get hands-on, but you'll find that you can always learn a
thing or two by going through our academy at academy.observepoint.com; there'll
be other tutorials there as well. Help.observepoint.com is another great
resource with tons of materials. And last but not least, talk to your success
manager. They do this every day. We love serving our customers. We love working
with you and are excited to share success with you guys. I'm going to go ahead
and pass the baton over to my good friend Lucas, and I'll stop sharing my
screen. So
Lucas, the floor is yours.

Lucas Pafume Silva: (13:12)
Thank you so much, Dylan. Before we start, thank you so much for inviting me to
be here. I'm really glad to participate here at VALIDATE 2021. So let's talk
about our first journey. We're talking here about the VivoStore FullJourney. We
run this journey hourly, because we need to understand whether the user has a
fiber internet connection in their house or not. So we have the full journey,
but on step seven we have the query that checks the zip code, to see, for
example, if we don't have the technology to provide the internet connection
there. So when we click here on action details and the request log, we have the
API, and this API sometimes gives us status code 200, but sometimes 400. And
when we have these cases, we try to understand the hour when this happened and
pass this kind of problem to the IT team to understand what happened. Because
we do have the internet connection for that house, but at that time the check
did not happen correctly. So this is the first use case. And here we validated
our rules for media tags and Google Analytics, and the other tools that we have
here. And ObservePoint saved us here too, because we understood all the
problems that we had with the API.
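The troubleshooting Lucas describes (finding the hours when the zip-code availability API answered 400 instead of 200) boils down to filtering hourly runs by status code. A minimal sketch, with invented data shapes:

```javascript
// Given hourly journey runs and the status code the availability API
// returned in each, list the hours where the backend failed, to hand
// to the IT team. The run shape { hour, status } is invented.
function failingHours(runs) {
  return runs.filter((run) => run.status >= 400).map((run) => run.hour);
}

// Summarize a day of runs: total executions vs. failed hours.
function availabilitySummary(runs) {
  const failed = failingHours(runs);
  return { total: runs.length, failed: failed.length, failedHours: failed };
}
```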

(15:06)
So, talking about the second use case, we're talking about Vivo Shopping. The
user starts on the product page and goes to the cart page. And sometimes we
have problems with the Google Tag Manager download. So when the user clicks to
add the product to the cart and goes to the cart, we don't know what happened,
but Google Tag Manager stopped loading on this page. And we lose all the
conversions and metrics for the end of the funnel. And when we select action
details and the console log here, we can see the logs from the application and
understand what happened. So here everything's okay, we don't have this error
now, but ObservePoint helped us to understand when this error occurs and to try
to understand how. It's a real problem because we can lose the conversions
here. So this is the second use case.
(16:24)
And the last use case that we have here is Vivo Shopping 2, but here we
selected all the carousels on the home page. We wanted to understand what
happens when the user selects the category page or product page, goes to the
next step, and then comes back home. Okay, Lucas, we understand: the user goes
to the product page, the catalog page, but sometimes we see too much time spent
on page load. So we created this journey hourly too, to understand at what
times of the day the user takes more time to load the next page. So, for
example, select step three on this journey, select the request log, and filter
here for the most expensive time. And we can see this file here: it's a CSS
file, and the load time is six seconds. That's too much time to load a page or
a request. And we took this data to the IT team to try to solve this problem.
And with ObservePoint we can understand what happened and compare before and
after the problem. So these are our use cases.
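The "most expensive time" filter Lucas uses can be approximated as sorting a request log by load time and flagging anything over a threshold; the talk's example was a CSS file taking about six seconds. A sketch with invented data shapes:

```javascript
// Return the requests slower than a threshold, slowest first.
// The request shape { url, timeMs } is invented for the example.
function slowestRequests(requests, thresholdMs = 3000) {
  return requests
    .filter((r) => r.timeMs > thresholdMs)
    .sort((a, b) => b.timeMs - a.timeMs);
}
```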

Dylan Sellers: (18:05)
Awesome. Thanks Lucas, for walking through that. I want to remind people that we
do have a chat here. If you have any questions about web journey use cases for
Lucas and me, feel free to provide them in the chat. I want to just comment on
some of Lucas's use cases. They're really powerful. I think the first one we
talked about, we have this form that allows us to verify or check if we have
internet access, and you're able to use ObservePoint to troubleshoot and
verify. We're basically identifying that service availability was not being
truly reported when users provided their zip code, which I thought was pretty
powerful. That's
more of like a functional test, right? Not necessarily analytics driven, but
you're able to use ObservePoint to identify that issue, which I think is super
cool.

Dylan Sellers: (18:49)
And even, Lucas, I think if you share your screen again and show the date
picker (I think you showed this to me the other day), you can see all the
different runs that executed in a day, and, you know, Lucas's journeys run
hourly. And so being able to see those running hourly and say, okay, on this
particular day 23 out of the 24 executions were successful, but here we go, we
have a couple of anomalies where it was not successful. I like this visual
example Lucas has, where we can say, okay, this internet service is supposed to
be available at all hours of the day, but for hours of the day when we ran this
test, the website was not saying it was available. So I thought that was a
super cool use case there. Thanks for sharing that, Lucas.

(19:37)
And then the second one is pretty core, with product details and validating
that that particular funnel is working, and that the analytics that are
critical to measuring those conversion events are covered. I thought that was
an awesome example, Lucas, and truly a bread-and-butter ObservePoint use case.
And then the last one is really creative, where you talked about using
ObservePoint screenshots. We take screenshots after every action is executed,
and you can tell from some of the screenshots that it takes time for those
screens to render and load. And you even dove into not just the screenshots,
but the load time of each request, and you're able to identify the slowest
loading requests, the ones that are impacting customer experience. So overall,
amazing job, Lucas.

(20:30)
Go ahead, you don't have to share anymore; I'll let you come back to just the
camera. But I wanted to congratulate users like Lucas who are really pioneering
with ObservePoint, finding new use cases, exploring new opportunities, and
sharing these with other people in the industry. So congratulations to you,
Lucas, and the team at Vivo for your awesome implementation. And I just want to
put in another plug for customer success and say, we're here, we're listening.
We want to get your feedback not only on your experience with us but on the
product, and we're ready and always willing to help. So please reach out and
leverage those resources.

Dylan Sellers: (21:10)
I'm gonna go ahead and just take a look at the questions here. We have
Prashanth Reddi, who asked: is there any possibility of running a single step
in the journey instead of the full journey? I think that's great product
feedback, Prashanth. We don't necessarily have functionality that says, hey,
stop at step or action two or three, but that's awesome feedback, especially
for those longer journeys. It would be nice to be able to say, let's just run
the first five steps for this next execution. So fantastic product feedback.
Those moderating the chat can help us capture that feedback, and we'll make
sure to pass it on to our product team. I think I've heard it before, not just
from Prashanth but from other customers. Fantastic feedback.

(21:50)
Dillon Batenik mentioned custom JS as a workaround, using try-catch and other
solutions. So this is a really popular use case: because we support JavaScript,
you can use try-catch-style scripts, and you can attempt something. And if it
doesn't succeed, maybe the element is there sometimes but not always, you can
make sure it doesn't interrupt your journey; it can move on. So that's one
method. But again, Dillon, I think, validated Prashanth's feedback about being
able to replay and retry certain steps versus the whole journey. So, fantastic
feedback, another vote from Dillon. If there are others, we'd love to hear it,
send that feedback our way. There's a feedback portal in the app, and your
success manager is always available. So reach out. Excited to be a part of
VALIDATE this year. Thanks for participating. And we'll go ahead and wrap up
there.
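A custom JS action of the kind Dillon describes might look roughly like the
sketch below. The selector is hypothetical, and this is illustrative rather
than official ObservePoint guidance; the idea is simply that a missing or
non-interactable element should not fail the journey.

```javascript
// Hedged sketch of a "custom JS" journey action: attempt an optional click,
// but never let a missing element break the journey.
// Returns true when the element was found and clicked.
function clickIfPresent(root, selector) {
  try {
    var el = root.querySelector(selector);
    if (el) {
      el.click();
      return true;
    }
    return false; // element absent this run; journey continues
  } catch (e) {
    // Swallow the error so subsequent actions still run.
    return false;
  }
}

// Inside a custom JS action you would call, for example:
// clickIfPresent(document, '.promo-banner-close');  // selector is hypothetical
```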

(22:41)
So, we have one more question. We'll take one more question; I think the
moderator's just sending it our way. Can we assign rules to a particular tag?
Good question. Yes, you absolutely can assign rules to a specific tag.
Actually, the way we would define it is we would apply it to an action, and
then in that action we would define the tags and the required variables and
values that are expected. So hopefully that answers your question, Karthik; if
not, you can reply back in the chat. But we would go to the action in the
journey and say, hey, at this particular action, step five, we expect this tag
to fire and these variables to be captured in that tag. You can also apply
rules globally to the journey; the global web journey rules use at-least-once
logic. So we can say, okay, at least one time in this series of five or six
actions, I need to see this event fire within this tag. So there are kind of
two different strategies there. Again, if it's not something that quite fits
your use case, Karthik, or if you have additional questions, just reach out to
your success manager, or you can reach out to support at observepoint.com,
happy to help.
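ObservePoint rules are configured in the app rather than in code, but the
at-least-once logic described above can be sketched like this. The tag and
variable names below are hypothetical placeholders.

```javascript
// Hedged sketch of "at least once" rule logic: across all requests captured
// in a journey, at least one request for the given tag must carry every
// expected variable/value pair.
function atLeastOnce(requests, tagName, expectedVars) {
  return requests.some(function (req) {
    if (req.tag !== tagName) return false;
    return Object.keys(expectedVars).every(function (key) {
      return req.variables[key] === expectedVars[key];
    });
  });
}

// Hypothetical usage:
// atLeastOnce(journeyRequests, 'Adobe Analytics', { events: 'event5' });
```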

(23:54)
Any more questions in the chat? If not, we'll wrap up. Okay. We got another one
from Dillon, in a journey, can you look at the results after executing at least
once? For example, variable summary, this is where I build many of the rules. I
think that's additional feedback, right? And to the point of executing just
certain steps and limiting to the steps and then find seeing those results
before we move on. So I think Dillon, that's another request for a feedback team
and appreciate your feedback.

Dylan Sellers: (24:33)
Awesome. Another one from Prashanth, based on Karthik's question that we had
earlier: what if we have three or four tags fire in the same call? Okay, so
firing in the same call, that can be a little confusing, so let me make sure
I'm interpreting that correctly. We have maybe three or four tags firing within
a single request, right? There are solutions where a lot of the server-side
type technologies send data in a single call, in batches, and then the server
parses the hits out and distributes them to the correct destinations. So that's
a little more of a server-side support question, but what we can support today,
Prashanth, is we can create a tag in our database for that batch of tags. Then,
as long as the data that you want to validate is available in the request's
body, and we can parse out those variables as it's being sent out, we can
validate that and create rules around it.
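As a rough sketch of what parsing a batched request body involves: some
server-side tools send several hits in one request body, one hit per line
(GA4's batch endpoint works roughly this way). Splitting the body back into
individual hits lets each one be validated separately. The payload format here
is illustrative, not any vendor's exact specification.

```javascript
// Hedged sketch: split a newline-delimited batch body into individual hits,
// each hit being a set of URL-encoded key=value pairs.
function parseBatchBody(body) {
  return body.trim().split('\n').map(function (line) {
    var hit = {};
    line.split('&').forEach(function (pair) {
      var kv = pair.split('=');
      hit[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
    });
    return hit;
  });
}
```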

Lucas Pafume Silva: (25:32)
Dylan, just a comment, I don't know if it's possible, but we had a case here,
for example, where we have two rules, one for page view and another one where,
in the same call, at the same moment, we have page view, promotion view, or
add-to-cart firing at the same time for two different UIs. So I don't know if
that's what he's talking about here.

Dylan Sellers: (26:03)
Gotcha. Yeah, there could be batched events. I know Google Analytics 4 can
batch different events together in a single call, so we'd definitely be happy
to discuss that more offline, anything like that is awesome. Karthik followed
up, he said, for example, an action step containing four tags, but I need to
assign a rule to tag three in the call. Okay, so that's a fantastic comment. If
all those tags, assuming they're all the same tag, let's say they're all Adobe,
for example, then you'd have to create a condition that looks for Adobe, and if
that Adobe tag contains a specific variable, then we set an expectation to
evaluate all those different variables. And this can be difficult; it's not
quite the best solution. The most elegant solution, I think, would be to be
able to check all four and say, okay, at least one of these four meets this
expectation. So that's great product feedback we'll pass on. And then if
they're not the same tags, it's actually pretty simple: we simply create an
individual line in our rule for the tag, and the variable within that tag that
we're expecting, so much simpler. Great questions. You guys are quizzing me
well today. This is awesome.

Dylan Sellers: (27:17)
Okay, we'll go ahead and wrap up with questions then. Very grateful for your
participation today. Grateful for you, Lucas, again, an honor as always. And
abraço, as we say in Portuguese, and we'll talk soon. So thank you, everyone.

Lucas Pafume Silva: (27:31)
Thank you guys, thanks ObservePoint team. Thank you, Dylan. And obrigado
pessoal, valeu! [Thanks everyone, cheers!]

Dylan Sellers: (27:37)
All right, I'll see you in the next session. Bye.




JOURNEY MAINTENANCE: TEST & MONITOR CRITICAL USER PATHS FOR FUNCTIONALITY -
LUCAS PAFUME SILVA, VIVO



Your users traverse critical paths within your site or app on a regular basis.
Are you sure those paths are functioning properly and that your MarTech
solutions are tracking correctly? Solving these issues as soon as they happen
will help you maintain a good customer experience and ensure that you’re able to
track it. Learn how you can:

 * Replicate your site’s user flows, such as shopping carts or user logins, from
   start to finish.
 * Test if anything prevents the path from completing or if the analytics are
   not tracking.
 * Configure items such as browser, location, and consent preferences to detect
   any issues a user may experience.

 

Lucas Pafume Silva

Digital Transformation Consultant, Vivo

Lucas graduated in Systems Analysis and Development at Mackenzie, with an
emphasis in digital marketing. Lucas has worked in the web analytics area for
over 5 years, analyzing user behavior in their online journey via desktop, app,
and physical stores, in order to generate insights into experiences and improve
purchase flows. Lucas specializes in implementing analytics, optimizing
environments, and extracting insights from digital analytics tools, audience
management, SEO, CRO and CRM. He also has expertise in Data Governance, Data
Validation, Automation Testing, Data Quality Assurance, Validation Plan, Tag
Implementation Plan, Data Visualization, Marketing Cloud, Digital Analytics,
Data Management Platform (DMP), Media Attribution, A/B tests, and APP Journey.

 

Dylan Sellers

Director of Customer Success, ObservePoint

As Director of Customer Success, Dylan oversees a team of technical success
managers focused on understanding, supporting, and solving the data quality and
privacy problems digital analysts and marketers face. Previously, as a Customer
Solutions Engineer for ObservePoint, Dylan built custom solutions to meet
clients' data collection and reporting needs while also acting as an internal
technical implementation specialist. Before his current role, he was head of UX
Research and Customer Education and the OP Community Slack Admin. His background
is in Product & Project Management with a Bachelor’s in Electrical Engineering.

 

--------------------------------------------------------------------------------

Dylan Sellers: (00:02)
Hello, everyone. Welcome to another session of VALIDATE. I hope the previous
sessions have been awesome. I am excited to be presenting today with my friend
Lucas Pafume. I will dive into our content today in just a second, but it's an
honor to present with Lucas. Kind of a fun fact: Lucas is based out of Brazil,
and we've actually had more conversations in Portuguese, so today will be
pretty unique for us, speaking and presenting in English, but I'm excited
nonetheless. We have a lot of cool content to share around web journeys, as
we'll talk about today. So I'm going to go ahead and start sharing my screen
and we'll get this kicked off.

Dylan Sellers: (00:50)
So for those of you who are joining now, again, welcome to VALIDATE. We're
going to dive in. My name is Dylan Sellers, I'm the Director of Customer
Success here at ObservePoint, and I'm joined by Lucas Pafume Silva, who is a
Digital Transformation Consultant at Vivo. Vivo is one of the largest
telecommunications companies in Brazil; in fact, it is the largest. They
provide internet and cellular services for a good portion of the population
down there in Brazil. Today we're going to be discussing journey maintenance,
and specifically, we're going to talk about how to test and monitor critical
user paths for functionality. Before we dive in, I just want to say Lucas has
some cool content he's going to share; he's going to cover some use cases
they're running at Vivo with ObservePoint. I'm going to focus on some best
practices and some principles of which journeys to cover.

Dylan Sellers: (01:45)
And then I want to point out as well that Lucas is a long-time ObservePoint
user. Previous to working at Vivo, he worked at a company called Via Varejo,
and he used ObservePoint so much, and loved it, that he came to Vivo and said,
"Hey, this is one of the first technologies we need to bring on board." So, a
long-time user and champion of ObservePoint, and a good friend. I'm excited to
present with Lucas today.

(02:10)
I'm going to go ahead and dive into our topic today. So when we talk about web
journeys as a solution, we first talk about the problem, the why, that we're
solving with web journeys. The problem is that conversion event testing is
painfully manual, yet critical to creating confidence in the reporting across
your business. Maybe you've experienced this: your boss comes and says, "Hey,
we just launched this new product last week. We need to understand how well
it's performing." And you go back to your reporting and say, "Oh, actually I
haven't been looking at this very closely. I just realized there seems to be
some kind of discrepancy, I need to go and troubleshoot." And at the end of
that troubleshooting road, you discover that, sure enough, there was some kind
of analytics issue with the deployment of that product. And you have to go back
to your boss and have that kind of awkward conversation about how you weren't
capturing the traffic for those particular conversion events, and you're going
to have to basically start fresh from today.

Dylan Sellers: (03:10)
So in order to avoid that kind of uncomfortable conversation with your boss, in
order to give you really visibility and confidence, your analytics that's where
we start to apply web journeys. So let's talk about what a web journey is,
first. A web journey is basically a solution that allows us to simulate a user
path on a website, on a digital property. And so when you look at this image on
the right side, you see these users reaching certain milestones that drive to
specific targets, specific revenue based goals for the business. And so these
users are on this path on their way to conversion on their way to driving
revenue for you. And oftentimes it can be while we're simulating this
experience, we're going to be testing the analytics, we're going to see what's
firing, we're gonna see what's not firing, we're going to see the variables and
values, we're going to see cookies, we're going to see a lot of data. And we're
going to create rules to validate and ensure that that data is collected
properly. But where should we start? And specifically, what should we be testing
is the first question we'll discuss today.

(04:12)
So, what should we test? Usually when the customer success team gets this
question, we return the question: what matters most to you? What are your
priorities? What matters most to the company or to the organization? You can
ask this question a lot of ways. What metrics have the most visibility or
impact at your company? What data collection issues have occurred recently?
What's the most recent story that really led you to think twice about your data
governance strategy, to start thinking twice about how you're preventing these
issues from occurring? What was that painful experience that drove you here in
the first place to find the solution? And then, last but not least, if all else
fails and you're still uninspired about what to test, ask what conversion
events drive revenue for your business. Follow the money and you'll find that
those conversion events are typically the top priority and the highest focus.
So, that's what we should be testing. And we're going to create web journeys
that simulate users executing these experiences and going down this path.

(05:21)
Let's talk about where we should be testing. There are two primary strategies:
lower-environment testing and production-environment testing. We'll talk about
both, since there are pros and cons to each. One of the pros of
lower-environment testing is that it's proactive: you can solve the problem
before it happens, which is pretty amazing, pretty rare that we can have that
kind of preventative protection with data governance. And then we also have
fewer obstacles. Typically in a production environment, you might require test
credentials; you can't go and buy products on your live website without some
kind of special test credentials to complete that purchase. Same for bot
detection: typically we don't run into reCAPTCHA barriers in lower
environments. So it's very, very convenient that when we do lower-environment
testing, we can kind of play with it and not worry about the consequences so
much.

(06:15)
The trade-off is that it's less stable, and you're more prone to hit server
errors. Maybe that test environment isn't always live and available. Maybe you
can't even get in, maybe you need a VPN or some kind of credentials, and it's a
pain, just more hoops for you to jump through. So consider that; that's one of
the barriers with lower-environment testing, but note that if you can overcome
these barriers, it's a great strategic solution. I'll also mention that it's
not always in sync with production. Most engineering teams try to keep an
environment as close to parity with production as possible, but they're not
always going to be identical environments. They might have different backend
systems, different servers supporting them. A lot of variables come into play
here, so just keep in mind that it's not foolproof.

(06:57)
Now, with production-environment testing, this reflects reality: this is live,
this is real customer data, and so it's the real experience your customers are
having. If you're having server errors or issues with your testing, it could be
that you're truly identifying stability issues on your website. We typically
have more stability in production, and Lucas is going to show some of this in
his part of the presentation; he's going to talk about how we can identify
stability issues, or issues where the backend wasn't providing the data we
needed. So just keep in mind that production is typically more stable, but we
can still detect those issues if they do exist. And then, last but not least,
it's easy to access. On most production websites I can access most of the
content by simply visiting the page; it's publicly accessible. I may need to
log in to access some content, but the vast majority of it is usually publicly
accessible.

(07:48)
The trade-off here is that it's reactive, and inevitably your reporting is
going to be impacted. The question is, can we reduce that response time, that
resolution time, and solve the problems so that they don't further hurt our
reporting and our analytics quality? It is technically reactive, but we can do
a lot of things to reduce that. We have obstacles like test credentials that
are required in order to purchase something, or maybe in order to submit a
form. We also have bot detection hurdles. So maybe we need to whitelist
ObservePoint's IP addresses when we go and crawl your site and simulate this
user experience. Or maybe there's a reCAPTCHA that we have to find a workaround
for in order to complete a form and test and simulate that conversion event. So
that's where we should be testing: kind of the two different perspectives of
lower-environment testing and production.

(08:42)
Next, I want to ask, when should we be testing? We should be asking, when do
you need to know about issues, and how quickly? Most of you, if I asked this,
would say, "Well, Dylan, I needed to know about this yesterday. I needed to
know about it last week; today is too late." And that's where we'd really push
you towards lower-environment testing; that's the best solution. Now, for those
who are in production, you're still gonna want a really quick response, a rapid
response time and resolution rate. ObservePoint has web journey frequencies
that you can configure, anywhere from as frequently as every 15 minutes out to
once a month. It's up to you, and I really recommend that you decide and make
the call on which frequency you should be testing, based on prioritization. If
it's your checkout flow, the bread and butter of your business that's driving
all of your revenue, like an e-commerce platform where you have to purchase
something, then that might be high priority, something you need up and running
all day. But if it's something of a lower priority, maybe a form, a newsletter,
signup fields, hourly might be sufficient, or daily might be sufficient.

(09:53)
Next, besides frequency, the other strategy is trigger-based. There are kind of
two ways we can do trigger-based testing. There's integrating with a CI/CD
pipeline, making sure that we integrate with the development cycle that your
engineers and IT teams use to deploy code. And then there's TMS-based
triggering: when I publish a new version of my tag management system, I might
want to trigger a series of web journeys based on that. As you can see in this
little diagram, we have the development cycle spelled out, and specifically,
after we've deployed code to a test environment and we are doing this automated
test, that's a great place to do lower-environment testing. And then after
we've deployed to production, we measure and validate. That's the second key
point in this development life cycle where you'd want to test and run
ObservePoint journeys. So that's the best practice here, and we recommend
taking one of these two approaches.
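A CI/CD integration of the kind described usually boils down to the pipeline
POSTing to a web service after a deploy step succeeds. The sketch below builds
such a request; the host, path, and payload are HYPOTHETICAL placeholders, not
ObservePoint's documented API, so consult your success manager for the actual
integration details.

```javascript
// Hedged sketch: build the HTTP request options a CI step would use to kick
// off a journey run after a deploy. All endpoint details are placeholders.
function buildTriggerRequest(journeyId, apiKey) {
  return {
    hostname: 'api.example.com',                                   // placeholder host
    path: '/v1/journeys/' + encodeURIComponent(journeyId) + '/runs', // placeholder path
    method: 'POST',
    headers: { Authorization: 'Bearer ' + apiKey }
  };
}

// A CI step would then send it, e.g. with Node's https module:
// require('https').request(buildTriggerRequest('12345', process.env.API_KEY)).end();
```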

(10:56)
Now, last but not least, I want to talk about just a couple of best practices,
and then we'll turn the time over to Lucas. I want to make sure we hit the
target, we hit the bullseye, and we follow these best practices. The first one
is prioritizing rules. Not all conversion events are created equal, and we need
to remember that some conversion events are more critical to the business than
others. So the rules that we create to validate these conversion events should
also be prioritized, and we should be focusing on what matters most.

(11:22)
Another pro tip is to start with Action Sets. Action Sets are groups of actions
that we create, and they can be used as components or modules that can be
reused across a series of journeys. So as you start planning out the different
test cases you're going to use, maybe you have a series of actions that logs a
user into the platform, completes a purchase, or fills out payment information,
and you want to reuse those components multiple times to be efficient. That is
a best practice and will save you time. So start with Action Sets and plan to
use them when you're mapping out your test cases.
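Action Sets are built in the ObservePoint UI, but the underlying idea of
reusable, composable action groups can be sketched like this. The step contents
(URLs, selectors) are hypothetical.

```javascript
// Hedged sketch: a reusable "login" action group composed into a larger
// journey, illustrating the Action Set idea.
var loginActions = [
  { type: 'navigate', url: 'https://example.com/login' },       // placeholder URL
  { type: 'input', selector: '#email', value: 'test@example.com' },
  { type: 'click', selector: '#submit' }
];

// Concatenate any number of action groups into one journey definition.
function composeJourney() {
  return Array.prototype.concat.apply([], arguments);
}
```

The same `loginActions` group can then be reused at the front of a checkout
journey, an account-settings journey, and so on, so a change to the login flow
only needs to be fixed in one place.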

(11:54)
Next is integrate: make sure you're leveraging Microsoft Teams, Slack, email,
Jira, SMS, or one of the other communication channels. And then, more
importantly, make sure that you are integrating ObservePoint into your process,
meaning that you are taking these messages, these notifications, and you have a
process to react, respond, and resolve the issues. Too many people overlook
that.

(12:18)
Leverage comparisons. Web journey comparisons, specifically, allow you to see
the difference between tags and their respective variables and values between
runs. So maybe I run a test this hour and again the next hour, and suddenly
there's a failure. I can then see a side-by-side comparison of what changed,
which is pretty powerful.
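Conceptually, a run-over-run comparison is a diff of the variables captured for
a tag in two consecutive runs. The sketch below shows that idea; the variable
names are illustrative, and the real report is produced by the product, not by
user code.

```javascript
// Hedged sketch: diff the variable maps from two runs, listing what was
// added, removed, or changed between them.
function diffVariables(prev, curr) {
  var diff = { added: [], removed: [], changed: [] };
  Object.keys(curr).forEach(function (k) {
    if (!(k in prev)) diff.added.push(k);
    else if (prev[k] !== curr[k]) diff.changed.push(k);
  });
  Object.keys(prev).forEach(function (k) {
    if (!(k in curr)) diff.removed.push(k);
  });
  return diff;
}
```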

(12:37)
Complete our web journey tutorials; make sure not to skip training. Most of you
like to dive in and get hands-on, but you'll find that you can always learn a
thing or two by going through our academy at academy.observepoint.com; there'll
be other tutorials there as well. Help.observepoint.com is another great
resource with tons of materials. And last but not least, talk to your success
manager. They do this every day. We love serving our customers, we love working
with you, and we're excited to share success with you. I'm going to go ahead
and pass the baton over to my good friend Lucas, and I'll stop sharing my
screen. So Lucas, the floor is yours.

Lucas Pafume Silva: (13:12)
Thank you so much, Dylan. And before we start, thank you so much for inviting
me to be here. I'm really glad to participate here at VALIDATE 2021. So let's
talk about our first journey. We're talking here about the Vivo Store full
journey. We run this journey hourly, because we need to understand whether the
user can have a fiber internet connection in their house or not. So we have the
full journey, but on step seven, we have the type of lookup used to check the
zip code, to see, for example, whether we have the technology to provide the
internet connection there. When we click here on action details and the request
log, we see the API, and this API sometimes gives us status code 200, but
sometimes 400. When we have these cases, we try to understand the hour that
this happened and pass this type of problem to the IT team to understand what
happened. Because we do have internet coverage for that house, but at that
time the check did not happen correctly. So this is the first use case. And
here we validated our rules for media tags and Google Analytics, and the other
tools that we have. ObservePoint saved us here too, because we could understand
all the problems that we had with the API.
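The check Lucas describes amounts to scanning a run's request log for the
coverage API and flagging any non-2xx responses, so the hour of the failure can
be handed to the IT team. A sketch, with a hypothetical URL fragment and log
shape:

```javascript
// Hedged sketch: find requests to a given API that returned a non-2xx status.
function findFailedApiCalls(requestLog, urlFragment) {
  return requestLog.filter(function (req) {
    return req.url.indexOf(urlFragment) !== -1 &&
           (req.status < 200 || req.status >= 300);
  });
}
```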

(15:06)
Talking about the second use case, we're talking about the Vivo Shopping
journey. The user starts on the product page and goes to the cart page. And
sometimes we have problems with the Google Tag Manager download: when the user
clicks add-to-cart for the product and goes to the cart, we don't know what
happened, but Google Tag Manager stopped loading on this page, and we lose all
the conversions and metrics at the end of the funnel. When we select action
details and the console log here, we can see the logs from the application and
understand what happened. So here everything's okay, we don't have this error
now, but ObservePoint helped us understand when it does hurt us, because we can
lose the conversions here. So this is the second use case.

(16:24)
And the last use case that we have here is Vivo Shopping 2, where we selected
all the carousels on the home page. We want to understand what happens when the
user selects the category page or product page, goes to the next step, and then
comes back home. Okay, Lucas, we understand, the user goes to the product page,
the catalog page, but sometimes we see too much time spent on page load. So we
created this journey, which also runs hourly, to understand at what time of day
the user has to wait longer for the next page to load. So, for example, select
step three on this journey, select the request log, and filter here for the
most expensive time. We can see this file here, it's a CSS file, and the load
time is six seconds, too much time to load a page or a request. We take this
data to the IT team to try to solve the problem. And with ObservePoint we can
understand what happened and compare before and after the problems. So these
are our use cases.
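The slow-request analysis Lucas walks through is essentially sorting a step's
request log by load time and looking at the worst offenders. A sketch, with
illustrative field names and load times in milliseconds:

```javascript
// Hedged sketch: return the N slowest requests from a step's request log,
// without mutating the original log.
function slowestRequests(requestLog, topN) {
  return requestLog
    .slice() // copy so the original order is preserved
    .sort(function (a, b) { return b.loadTimeMs - a.loadTimeMs; })
    .slice(0, topN);
}
```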

Dylan Sellers: (18:05)
Awesome. Thanks, Lucas, for walking through that. I want to remind people that
we do have a chat here; if you have any questions about web journey use cases
for Lucas and me, feel free to drop them in the chat. I want to comment on some
of Lucas's use cases, because they're really powerful. In the first one, we
have this form that allows us to verify or check if we have internet access,
and you're able to use ObservePoint to troubleshoot and verify. We're basically
identifying that services were not being correctly reported when users provided
their zip code, which I thought was pretty powerful. That's more of a
functional test, right? Not necessarily analytics-driven, but you're able to
use ObservePoint to identify that issue, which I think is super cool.

Dylan Sellers: (18:49)
And even, Lucas, if you share your screen again and show the date picker, I
think you showed this to me the other day, you can see all the different runs
that executed in a day, and Lucas's runs are hourly. So being able to see those
running hourly and say, okay, on this particular day, 23 out of the 24
executions were successful, but here we have a couple of anomalies where it was
not successful. I like this visual example Lucas has, where we can say, okay,
this internet service is supposed to be available at all hours of the day, but
for a few hours of the day when we ran this test, the website was not reporting
it as available. So I thought that was a super cool use case there. Thanks for
sharing that, Lucas.

(19:37)
And then the second one is pretty core with products details and validating that
particular funnel is working, and that the analytics that are critical to
measuring those conversion events are covered. I thought that was an awesome
example, Lucas, and truly a bread and butter ObservePoint use case. And then the
last one is really creative where you talked about the using ObservePoint
screenshots. So we take screenshots after every action is executed. And you can
tell that some of the screenshots that it takes time for that those screens to
render into load. And here you're seeing that it took, and you even dove into
not just the screenshots, but like the load time of each request. And you're
able to identify the slowest loading requests, the ones that are impacting
customer experience. So overall, amazing job, Lucas.

(20:30)
Go ahead, you don't have to share anymore; I'll let you come back to just the
camera. But I wanted to congratulate users like Lucas who are really pioneering
with ObservePoint, finding new use cases, exploring new opportunities, and
sharing them with other people in this industry. So congratulations to you,
Lucas, and the team at Vivo for your awesome implementation. And I just want to
put in another plug for customer success and say: we're here, we're listening.
We want your feedback on your experience with both us and the product, and we're
ready and always willing to help. So please reach out and leverage those
resources.

Dylan Sellers: (21:10)
I'm going to go ahead and take a look at the questions here. Prashanth Reddi
asked, is there any possibility of running a single step in the journey instead
of the full journey? I think that's great product feedback, Prashanth. We don't
currently have functionality that says, hey, stop at step or action two or
three, but that's awesome feedback, especially for those longer journeys. It
would be nice to be able to say, let's just run the first five steps for this
next execution. So fantastic product feedback. We'll make sure to capture that;
those moderating in the chat can help us capture that feedback, and we'll make
sure to pass it on to our product team. And I think I've heard it before, too,
not just from Prashanth but from other customers. Fantastic feedback.

(21:50)
Dillon Batenik mentioned custom JS as a workaround, using try-catch and other
solutions. This is a really popular approach: because we support JavaScript, you
can use try-catch-style scripts to attempt something, and if it doesn't succeed,
maybe the element is there sometimes but not always, you can make sure it
doesn't interrupt your journey and it can move on. So that's one method, but
again, I think Dillon's comment validates Prashanth's feedback about being able
to replay and retry certain steps rather than the whole journey. So fantastic
feedback, and another vote, this one from Dillon. If there are others, we'd love
to hear them; send that feedback our way. There's a feedback portal in the app,
and your success manager is always available. So reach out. Excited to be a part
of Validate this year. Thanks for participating, and we'll go ahead and wrap up
there.

(22:41)
So, we have one more question, we'll take one more question; I think the
moderator's just sending it our way. Can we assign rules to a particular tag?
Good question. Yes, you absolutely can assign rules to a specific tag. Actually,
the way we would define it is we would apply the rule to an action, and in that
action we would define the tags and the required variables and values that are
expected. So hopefully that answers your question, Karthik; if not, you can
reply back in the chat. We will go to the action in the journey and say, hey, at
this particular action, step five, we expect this tag to fire and these
variables to be captured in that tag. You can also apply rules globally to the
journey: global web journey rules use at-least-once logic, so we can say, okay,
at least one time in this series of five or six actions, I need to see this
event fire within this tag. So there are two different strategies there. Again,
if that doesn't quite fit your use case, Karthik, or if you have additional
questions, just reach out to your success manager, or you can reach out to
support@observepoint.com. Happy to help.
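[Editor's note: the at-least-once logic described above can be pictured with a small sketch. This is not ObservePoint's rule engine; the function name `atLeastOnce` and the data shape for actions and tags are assumptions made purely for illustration.]

```javascript
// Illustrative only: an "at least once" journey rule passes if any action
// in the journey captured the expected tag with the expected variable value.
function atLeastOnce(actions, tagName, variable, expected) {
  return actions.some(function (action) {
    return action.tags.some(function (tag) {
      return tag.name === tagName && tag.variables[variable] === expected;
    });
  });
}

// A five-action journey where only the fourth action fires the event.
var journey = [
  { tags: [] },
  { tags: [{ name: 'Adobe Analytics', variables: { pageName: 'home' } }] },
  { tags: [] },
  { tags: [{ name: 'Adobe Analytics', variables: { events: 'purchase' } }] },
  { tags: [] }
];
```

An action-level rule, by contrast, would run the same check against a single action's tags instead of scanning the whole journey.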

(23:54)
Any more questions in the chat? If not, we'll wrap up. Okay, we got another one
from Dillon: in a journey, can you look at the results after executing at least
once? For example, the variable summary, which is where I build many of the
rules. I think that's additional feedback, right? It ties into the point about
executing just certain steps, limiting the run to those steps, and then seeing
those results before moving on. So I think, Dillon, that's another request for
the feedback team, and I appreciate your feedback.

Dylan Sellers: (24:33)
Awesome. Another one from Prashanth, based on Karthik's question from earlier:
what if we have three or four tags firing in the same call? Okay, firing in the
same call, that can be a little confusing, so let me make sure I'm interpreting
it correctly. We have maybe three or four tags firing within a single request,
right? I think there are solutions that send everything in one call, and a lot
of the server-side-type technologies send data in a single call in batches; the
server then parses the events out and distributes them to the correct endpoints.
So that's a little more of a server-side support question. But what we can
support today, Prashanth, is that we can create a tag in our database for that
batch of tags. Then, as long as the data you want to validate is available in
the request body and we can parse out those variables as they're being sent, we
can validate them and create rules around them.
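[Editor's note: for a concrete picture of what "parse the variables out of the request body" means, here is a rough sketch assuming a GA4-style batch where the POST body carries one query-string-encoded event per line. The helper names are invented, and the body format is an assumption about the vendor, not a description of ObservePoint internals.]

```javascript
// Hypothetical parser for a batched hit: one event per line of the body,
// each line encoded like a query string (e.g. "en=add_to_cart&pr1=sku123").
function parseBatchBody(body) {
  return body.trim().split('\n').map(function (line) {
    var event = {};
    line.split('&').forEach(function (pair) {
      var parts = pair.split('=');
      event[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
    });
    return event;
  });
}

// A rule such as "at least one event in this batch is add_to_cart"
// ("en" is GA4's event-name parameter):
function hasEvent(events, name) {
  return events.some(function (e) { return e.en === name; });
}
```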

Lucas Parfume Silva: (25:32)
Dylan, just a comment, I don't know if it's possible, but we had here, for
example, two who want to lead and we have two rules, one for page view and
another one for, even in this call it in the same moment for example, have page
view, promotion view, or add-to-cart, to fire in the same time for, two
different UIs. So I dunno if the he's talking about here.

Dylan Sellers: (26:03)
Gotcha. Yeah, there could be, could be batches events. I know the Google
analytics for you can kind of batch different events together in a single call.
So definitely we'd happy to discuss that more offline, anything that's awesome.
Karthik followed up with it, he said for example, action step containing four
tags, but I need to assign a rule to tank three in the call. Okay. So that's
fantastic comment. So if all those tags assuming that they're all the same tag,
then you'd have to create a condition that looks for, let's say they're all
Adobe, for example. And we say, if we're looking at Adobe and that Adobe tag
contains a specific variable, then we set an expectation to evaluate all those
different variables, and this can be difficult. It's not quite the best
solution. The most elegant solution I think would be to be able to check all
four and say, okay, at least one of these four meets this expectation. And so a
great product feedback we'll pass on. And then if they're not the same tags and
it's actually put simple and we're going to simply create an individual line in
our rule for the tag, and the variable within that tag, that we're expecting so
much simpler. Great questions. You guys are quizzing me well today. This is
awesome.

Dylan Sellers: (27:17)
Okay. We'll go ahead and wrap up the questions then. Very grateful for your
participation today. Grateful for you, Lucas; again, an honor as always. And
abraço ("a hug"), as we say in Portuguese, and we'll talk soon. So thank you,
everyone.

Lucas Parfume Silva: (27:31)
Thank you guys, thanks ObservePoint team. Thank you, Dylan. And obrigado
pessoal, valeu! ("Thanks, everyone, cheers!")

Dylan Sellers: (27:37)
All right, I'll see you at the next session. Bye.
