Wednesday, 25 April 2018


Informatica Domain High Availability

Terminology

  • Gateway Node: A node configured as a gateway; it can become the master of the domain.
  • Master Gateway Node: The gateway node that is currently acting as the master.
  • Worker Node: A node marked as a worker never becomes the master of the domain; it reports to the current master.
  • Master Election: A routine run by nodes to discover the current master or unanimously elect a new one.
  • Domain Database: The database schema where domain metadata is stored. This database acts as the arbiter during the master election process.
  • Database Heartbeat: An update query run periodically by the master node against the domain database to broadcast its liveness.
  • Node Heartbeat: A heartbeat message sent by every non-master node to the master node to report its liveness.

Design

The availability of the Informatica domain depends on the availability of an elected master node. When the first gateway node starts, it runs the master election routine and becomes the master node. When subsequent gateway nodes start, they discover the first node as the master (as part of master election) and register with it. These nodes then periodically send a node heartbeat to the master node to report their liveness.

When the master node terminates, the remaining live gateway nodes detect its unavailability through the failed node heartbeat. They immediately re-run the master election (master re-election) to elect a new master. After a new master is identified, the remaining nodes register with it.

Clients may not be able to connect to Informatica Services temporarily when there is no elected master in the domain.
Note
Worker nodes do not participate in domain high availability. During startup, a worker node attempts to connect to every gateway node in the domain to identify the current master, and registers with the master once it is identified.

Database Heartbeat

The master node runs a heartbeat update query against the domain database periodically to record its liveness. This is how the other gateway nodes know that a live master exists.
All gateway nodes perform this heartbeat during the master election routine; it informs each electing node about the other participating nodes.
The master node also uses this heartbeat as a test of domain database availability.
The database heartbeat period is controlled by the domain-level custom property MasterDBRefreshInterval. The default (and minimum) value is 8 seconds.

Database heartbeat timeout

When the master node terminates unexpectedly, the other gateway nodes wait for a timeout before attempting to become the new master. This timeout is 4 x MasterDBRefreshInterval (32 seconds by default). If a temporary glitch delays or fails the master's heartbeat, the longer timeout adds tolerance and avoids triggering a spurious re-election.

When the master node fails to update the database within the timeout, it gives up the master role and terminates itself. This keeps its behavior consistent with the other nodes, which may already be electing a new master (especially when the master node loses network connectivity).

Note

In a domain with a single gateway node (where domain HA and master election do not apply), the heartbeat timeout is 12 x MasterDBRefreshInterval (96 seconds by default). Here, the heartbeat serves only as a test of domain database availability.
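For quick reference, the timeouts derive from the heartbeat interval as follows; the value of 10 seconds below is a hypothetical custom setting, not a recommendation:

# Hypothetical example: MasterDBRefreshInterval raised from 8 to 10 seconds
#   database heartbeat period                  = 10 s
#   re-election timeout (multiple gateways)    =  4 x 10 =  40 s
#   self-termination timeout (single gateway)  = 12 x 10 = 120 s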

Node heartbeat

The master node needs to know which nodes are alive in order to maintain service status and availability. All non-master nodes therefore update the master node periodically to report their liveness.

The node heartbeat interval is controlled by the node command-line option -Dinfa.masterUpdateTimeInterval (configured via the INFA_JAVA_OPTS environment variable) and defaults to 15000 milliseconds.
Note

This has to be configured to the same value on all nodes (gateways and workers alike).
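A minimal sketch of applying this on a Linux node, assuming a bash shell; the 15000 value simply restates the default:

# Set the node heartbeat interval (milliseconds), then restart the node.
# Repeat with the same value on every node in the domain.
export INFA_JAVA_OPTS="${INFA_JAVA_OPTS} -Dinfa.masterUpdateTimeInterval=15000"
$INFA_HOME/server/tomcat/bin/infaservice.sh shutdown
$INFA_HOME/server/tomcat/bin/infaservice.sh startup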

Node heartbeat timeouts

Master node behavior              

If the master does not receive a heartbeat from a node within 6 x NodeHeartBeat seconds (90 seconds by default), it marks the node status (as well as the services running on that node) as inactive/dead and attempts to start those services on other live nodes (as configured).

In the event of an unexpected non-master node termination, it therefore takes 6 x NodeHeartBeat seconds before the node is marked as dead.

Non-Master node behavior

If a non-master node fails to send its heartbeat to the master (error/timeout), it assumes that the master node is unavailable and re-runs master election. This no-wait model ensures that the gateway nodes do not waste time before re-electing a master.

In case of a temporary glitch causing the heartbeat message to error out or time out, the node will discover that the same old master node is still alive and continue reporting to it.

Common database heartbeat failures

As explained in the previous section, the master node terminates itself if it fails to run the heartbeat query within the timeout; the termination is recorded in node.log.

This occurs in the following situations:

Error in accessing domain database

When the domain database is unavailable (planned or unplanned) for longer than the timeout period, the heartbeat fails on the node.

Errors (SQLException) will be logged in node.log and exceptions.log before the termination message.

Network/communication errors

Network errors such as Connection refused (when the database server is not accessible), No route to host (when the Informatica host loses network connectivity), or Connection timed out (when a TCP timeout occurs) will be logged in node.log and exceptions.log.

Timeouts

When the heartbeat thread in the master node does not update the database within the timeout, the node terminates itself.
A fatal message is logged in node.log when the query could not be run within the timeout.

Following situations can cause the timeout:
  • Network issues
    The network at either end (Informatica host or database host), or any network element in between, can cause timeout errors.
  • Resource crunch and/or node process starvation
    Saturated system resources on the Informatica host (CPU, memory, disk, network) can starve the node process, causing heartbeat threads to time out.
  • Java garbage collection pauses
    The Informatica node is a Java process, and Java's garbage collection threads might suspend the application involuntarily for longer than the timeout.

Common Node heartbeat failures

By design, the node heartbeat is interpreted differently by the master node and by non-master nodes. The master node uses it to learn the status of every other node in the domain so that it can ensure service availability; non-master nodes piggyback on it to track the availability of the master gateway node.
Accordingly, a failed heartbeat message (on a non-master node) and a heartbeat timeout (on the master node) trigger different routines. Below are the common situations:

Node heartbeat timeout failure

When the master node fails to get a heartbeat from a non-master node within the timeout, it marks the node as inactive and starts failing over the services that were running on that node to other available nodes (if applicable). Whether the non-master node process was actually alive does not matter: if it cannot update the master, it is as good as dead.
  • Logs on the non-master node help identify whether the node was terminated for some reason.
  • Logs on the master node, together with the domain logs, tell the story of the heartbeat failure.
  • If the non-master node was alive yet marked inactive, infa9dump output from both the non-master node and the master node helps identify any blocked communication between the two nodes.

Node heartbeat failure

When a single heartbeat message from a non-master node is not delivered on time, the non-master node questions the availability of the master and starts master re-election (if it is a gateway node) or searches for a new master (if it is a worker node). However, it continues to run its services during this period.

In case of a temporary network glitch, the heartbeat failure has no impact on the domain or its services.
node.log shows an error as follows:

ERROR [Domain Monitor] [DOM_10025] The node cannot send heartbeat messages to the master gateway node. 

Troubleshooting

Heartbeat failures/timeouts can have different causes: Informatica configuration or related issues, and system-level causes such as resource utilization, environment, and network issues.
The following are common steps to isolate the issue to one of these causes.

Troubleshooting Informatica application-related issues

InfaLogs

Analyzing logs from different components/nodes helps assemble the parts into the full root-cause story. Collecting InfaLogs, including domain and service logs, from all nodes in the domain helps in understanding the complete situation.
Note

Logs from an unaffected node also help in understanding the pattern of affected vs. unaffected parts of the domain.

For database heartbeat failures, JDBC Spy logging can be enabled to debug the queries executed and any relevant errors/delays (however, this does not help with database connectivity issues).

InfaDump

InfaDump collects diagnostics from the node process, such as thread and heap dumps, for deeper analysis. This is useful for debugging situations where heartbeat threads are unexpectedly blocked or stuck, causing timeouts/failures even though the process is still running.

Java GC logging

Enabling Java's GC log (a command-line option) for the node process will help identify whether GC pauses are the cause of the issue. Typically, a high GC pause is due to an undersized -Xmx for the node process. The default value of 512m is the minimum for a node; the necessary size can be higher depending on the number of services and users in the domain.
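A minimal sketch of enabling GC logging on a Java 8 node through INFA_JAVA_OPTS; the log path is illustrative:

# Append Java 8 GC logging flags for the node process (log path is illustrative)
export INFA_JAVA_OPTS="${INFA_JAVA_OPTS} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/node_gc.log"
# Long pauses recorded around the heartbeat timeout window point to GC as the cause.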

In rare situations, this can be due to a memory leak in the node process. Collecting infa9dump on the node process will help isolate memory leaks.

Troubleshooting system & network

System resource crunch (and/or) process starvation

Monitoring system resources at a granular level and identifying resource saturation and process starvation during the timeout window will help confirm them as the cause of the heartbeat failure. Capturing all resource utilization metrics, along with the load average, will show any anomalies. In a virtualized environment, it is critical to monitor the resources actually granted to the guest, not just the provisioned resources. For instance, metrics such as CPU ready % and memory ballooning in VMware help identify starvation caused by virtualization.
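A minimal sketch of capturing host-level metrics during the suspect window; sar assumes the sysstat package is installed:

# Sample CPU, memory, and load every 5 seconds, 12 times each
vmstat 5 12
sar -u 5 12      # CPU utilization
sar -r 5 12      # memory utilization
uptime           # load average snapshot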

Network errors

Network errors such as dropped packets or delayed transfers at either end of the communication (master host <-> non-master host, master host <-> database host), or in any network element in between (switches, routers, firewalls, the virtualization layer), can cause timeouts. This can be due to network congestion, broken hardware, configuration issues, etc.

Comparing TCP packet captures taken at both ends of the communication will help isolate packet delivery issues.
Note
A capture filter on the host IP addresses and port number (the database port for database heartbeat failures, the node's Service Manager port for node heartbeat failures) can reduce the size of the capture.
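A minimal tcpdump sketch run on the Informatica host; the database hostname and port 1521 (an Oracle listener) are illustrative:

# Capture only traffic to/from the database host and port (values illustrative)
tcpdump -i any -w /tmp/db_heartbeat.pcap host dbhost.example.com and port 1521
# Run an equivalent capture on the database host and compare the two files.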

Tuning

The default heartbeat intervals and timeouts in Informatica are expected to work in robust environments. However, depending on system load and related configuration, there can be many unexpected heartbeat failures (followed by unexpected service unavailability/failover). The heartbeat intervals can be increased to make the domain more tolerant of temporary failures.

Note

Increasing heartbeat intervals has the side effect of delaying failover when a real failure happens. For example, after increasing the database heartbeat interval, master re-election will take longer when a master node terminates unexpectedly.

The main tunables relevant to Informatica heartbeats are MasterDBRefreshInterval (database heartbeat) and infa.masterUpdateTimeInterval (node heartbeat), both described above.

How to stabilize an Informatica domain in a non-optimal environment

There are four types of settings that influence performance and can help stabilize a domain:
  1. Resilience: settings that keep processes alive during (temporary) network failures
  2. Timeouts: values that govern if and when a connection is considered lost
  3. Intervals: values that govern how often to check whether a process or service is alive
  4. Performance: settings that reduce the load on the repository database

Resilience

Resilience is the ability of PowerCenter service clients to tolerate temporary network failures until the timeout period expires or you resolve the system failure. Clients that are resilient to a temporary failure can maintain connection to a service for the duration of the timeout.

There are two variables associated with resilience:

Limit on Resilience Timeouts: 180 (default). The amount of time a service waits for a client to connect or reconnect to the service. This limit can override the client resilience timeout configured on a connecting client. It is available for the domain and for application services.

Resilience Timeout: 30 (domain) and 180 (Repository and Integration Services). The amount of time a client attempts to connect or reconnect to a service. A limit on resilience timeouts can override this timeout.
In general, it is advisable to keep all limits at the same level, to keep the interdependencies consistent.

Timeouts
Connection Timeout: 40000 (server/tomcat/conf/server.xml). The number of milliseconds the HTTPS Connector waits, after accepting a connection, for the request URI line to be presented. This is a general Tomcat server parameter that controls how long the server keeps a connection between the browser and the server open after a response is complete.
Repository Service: Database Properties: DatabaseConnectionTimeout: 180. The period of time the Repository Service attempts to establish or re-establish a connection to the database system. Default is 180 seconds.

Refresh intervals
Domain: Properties: Custom Properties: MasterDBRefreshInterval: 8 (effectively 28 seconds with factor 3.5). This is the interval at which the domain's master gateway writes its liveness entry into the domain database table PCSF_MASTER_ELECTION. If this update fails four consecutive times, the master gateway node shuts down. If there are multiple gateway nodes in the domain, a new master gateway will be elected, provided the domain database is alive.
/server/tomcat/bin/infaservice.sh -Dinfa.masterUpdateTimeInterval=15000.
The domain marks a node inactive if the master gateway node does not get a notification from the node within six times 15 seconds. When the CPU load on the machine is high, a worker node may not send the notification in time; the master gateway node then marks the node inactive, and all services on the node become unavailable even though the node is actually up and running. Change this interval by adding the following system property to INFA_JAVA_OPTS: setenv INFA_JAVA_OPTS="${INFA_JAVA_OPTS} -Dinfa.masterUpdateTimeInterval=X"
Repository Service: Properties: Advanced Properties: HeartBeatInterval: 60. The interval at which the Repository Service verifies its connections with clients of the service.

Performance
Repository Service: Properties: Database Properties: Optimize Database Schema: No. Enables optimization of the repository database schema when you create repository contents or back up and restore an IBM DB2 or Microsoft SQL Server repository. When you enable this option, the Repository Service creates repository tables using VARCHAR(2000) columns instead of CLOB columns wherever possible. Using VARCHAR columns improves repository performance because it reduces disk input and output, and because the database buffer cache can cache VARCHAR columns.

To use this option, the repository database must meet the following page size requirements:

IBM DB2: Database page size 4 KB or greater. At least one temporary tablespace with page size 16 KB or greater.
Microsoft SQL Server: Database page size 8 KB or greater.

Thursday, 29 March 2018


Informatica ERROR: Failed to allocate memory (out of virtual memory)


Hello!!! Out of the blue you got the virtual memory error below in Informatica and have no clue what's going on? Don't worry, you have landed on the right page.


Problem 
Severity Timestamp Node Thread Message Code Message
*********** FATAL ERROR : Failed to allocate memory (out of virtual memory). ***********
Severity Timestamp Node Thread Message Code Message
*********** FATAL ERROR : Aborting the DTM process due to memory allocation failure. ***********

As you can see, the error message is very generic and doesn't give much of a clue. I have summarized the possible causes and solutions for this issue below.

Possible Causes 
  1. The issue happens when a column precision value is set to a very high value that Informatica does not support. For example, while importing table metadata from a Microsoft SQL Server database in the PowerCenter Source Analyzer, the column precision may be set to 1073741823, whereas PowerCenter only allows a maximum precision of 104857600.
  2. The commit interval is set to a very high value, so the memory calculated for the session is also very high, exceeding the memory that can be allocated, and the session fails to initialize.
  3. This issue occurs when the source or target field precision is set to a high value, the DTM Buffer Size is set to zero, and the Connection Retry Period property is enabled on either the source or the target connection. When the Connection Retry Period is set to a non-zero value, the session requires much more memory than when this property is set to zero.

Possible Solutions
  1. Reduce the column precision to the maximum value Informatica allows for the particular data type.
  2. Reduce the commit interval.
  3. Set the Connection Retry Period to zero, keeping in mind that this makes the session non-resilient.
Hope this helped; if not, please contact us using the contact form on the home page.



Wednesday, 28 March 2018


Informatica How to Series (Useful Tips)



In this post we will give you a few useful how-to tips.

How to merge two target files with the same structure in one session/mapping
#########################################################################
  1. At the session level, make sure both targets have the same file name.
  2. Choose Sequential Merge as the merge type for both targets.
  3. For the second target (as per the target load plan in the Designer), select "Append if Exists".
#########################################################################

How to validate all mappings in a particular folder
#########################################################################
Please refer to the link below for details:
How to validate all-mappings in folder

#########################################################################

How to create a target file name with a timestamp
#########################################################################
There are three options to achieve this:
  1. Rename the file in a post-session command using a Unix script (see the sketch below)
  2. Create a workflow variable with a timestamp
  3. Use Output Type "Command" in the session properties
I will explain all three methods in detail soon.
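A minimal sketch of option 1 as a post-session success command; the directory and file names are illustrative:

# Post-session success command: stamp the session output file with the run time
mv /data/out/orders.out /data/out/orders_$(date +%Y%m%d_%H%M%S).out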
#########################################################################

Tuesday, 27 March 2018


Strange !!! Mapping Variable Value Taking Persistent value from Repository


Today we are going to discuss the behavior of mapping variables' persistent values. Most experienced Informatica developers have used the concept of assigning a mapping variable value to a workflow variable using the post-session variable assignment feature. Passing mapping variables is quite handy in scenarios where you want to perform some action based on a value passed from the mapping, e.g. send an email or start another session.

Normally it is done in the following way:
  1. Define a mapping variable.
  2. Assign a value to the variable using the SETVARIABLE function.
  3. Pass the value to a workflow variable using post-session variable assignment.
All is good until we encounter this: VAR_27049 : Mapping variable name: [$$MAP_VAR], persisted value: [9], run instance name [].

Most of us are confused by this issue and try different ways to overcome it, such as assigning the value in a parameter file.
Mapping variables are different from normal programming-language variables in the following ways:
  1. Mapping variables have an aggregation type, either Maximum or Minimum (see Informatica Designer > Mappings > Parameters and Variables).
  2. Mapping variables may be assigned a variety of values during the session run; the final value of the variable is either the maximum or the minimum of those values.
  3. In the related mapping, if the aggregation is set to Max, the final value is the maximum value. That means that if the value given in the parameter file is lower than the value already in the repository, it will be ignored.
The example below makes it clearer.
Example
Consider a Max-aggregation mapping variable VAR1 whose persisted value is *1000*.
**********************************************************************************************************
2018-02-23 18:02:59 : INFO : (6615 | DIRECTOR) : (IS | myinteg_IS) : node01 : TM_6962 : The variable [$$WFVAR1] will be assigned from [$$VAR1] 
2018-02-23 18:02:59 : INFO : (6615 | DIRECTOR) : (IS | myinteg_IS) : node01 : VAR_27048 : Persisting mapping variable values to the repository. 
2018-02-23 18:02:59 : INFO : (6615 | DIRECTOR) : (IS | myinteg_IS) : node01 : VAR_27049 : Mapping variable name: [$$VAR1], persisted value: [1000], run instance name []. 
**********************************************************************************************************

If the mapping variable comes out greater than 1000, everything works fine. But if the value comes out less than 1000, the repository keeps the persistent value, which only gets reset via the Workflow Manager.
You can reset the mapping variable from the session (right-click the Session task, select View Persistent Values, and click Reset Values to delete the existing variable values). This is a manual step and is not feasible for automated runs.
Solution: Apparently there is no direct way to avoid this; you need to design your mapping so that it always produces a value greater than the one saved in the repository (or less, for aggregation type Minimum). One way is to prefix the value with a running number, as sketched below.
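A minimal sketch of the running-number idea in an Expression transformation; the variable and port names are illustrative. Prefixing with the session start timestamp makes each run's value lexicographically larger than the previous run's, so a Max-aggregation string variable is always updated:

-- Expression transformation output port (names illustrative)
SETVARIABLE($$MAP_VAR, TO_CHAR(SESSSTARTTIME, 'YYYYMMDDHH24MISS') || '_' || STATUS_FLAG)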

Thursday, 15 March 2018


Autosys Difference Between On Hold and On Ice Jobs



On Hold
  • A job is put on hold when the JOB_ON_HOLD event is raised.
  • Indicates that the job is on hold and cannot run until you take it off hold.
  • Dependent jobs do not run while a job is "on hold"; nothing downstream of the job will run.
  • An ON_HOLD job runs once you put it off hold and its starting conditions are met.

On Ice
  • A job is put on ice when the JOB_ON_ICE event is raised.
  • Indicates that the job is removed from the job stream but is still defined.
  • When a child job is placed on ice, jobs downstream of the "on ice" job run as though it succeeded. The job is removed from all conditions and logic but is still defined; operationally, this is like deactivating the job. It remains on ice until it receives the JOB_OFF_ICE event.
  • An ON_ICE job does not run when it is put off ice, even if its starting conditions were met while it was on ice.

Below are the commands used for putting a job on/off hold and on/off ice:

Put a job on hold:
$ sendevent -E JOB_ON_HOLD -J <job_name>

Take a job off hold:
$ sendevent -E JOB_OFF_HOLD -J <job_name>

Put a job on ice:
$ sendevent -E JOB_ON_ICE -J <job_name>

Take a job off ice:
$ sendevent -E JOB_OFF_ICE -J <job_name>
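To confirm that the status change took effect, you can query the job afterwards (a minimal sketch; <job_name> is a placeholder):

# Show the job's current status after the sendevent
autorep -J <job_name>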

Saturday, 10 March 2018


More Informatica Interview questions (26-50)


You can find our previous set of questions at the link <<<Informatica Interview Question 1-25>>>

Question 26

What are the different dynamic partitioning configurations that can make a session run with a single partition?

You set dynamic partitioning to the number of nodes in the grid, and the session does not run on a grid.
You create a user-defined SQL statement or a user-defined source filter.
You use dynamic partitioning with an Application Source Qualifier.

Question 27
In which situations will you use the Source Filter, Select Distinct, and Number of Sorted Ports properties of the Source Qualifier transformation?
  • The Source Filter option is used to reduce the number of rows the Integration Service queries, so as to improve performance.
  • The Select Distinct option is used when we want the Integration Service to select unique values from a source; filtering out unnecessary data earlier in the data flow improves performance.
  • The Number of Sorted Ports option is used when we want the source data in sorted order, for use in downstream transformations such as Aggregator or Joiner, which improve performance when configured for sorted input.

Question 28
What is a persistent lookup cache?
Lookups are cached by default in Informatica, and the lookup cache can be either non-persistent or persistent: the Integration Service saves or deletes lookup cache files after a successful session run based on whether the lookup cache is marked persistent. If the cache generated for a lookup needs to be preserved for subsequent use, a persistent cache is used; the Integration Service then does not delete the index and data files. It is useful only if the lookup table remains constant.
Question 29
What are the restrictions of Union Transformation?


  1. All input groups and the output group must have matching ports. The precision, data type, and scale must be identical across all groups.
  2. We can create multiple input groups, but only one default output group.
  3. The Union transformation does not remove duplicate rows.
  4. We cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
  5. The Union transformation does not generate transactions.
Question 30 
What is difference between copy and shortcut?

Copy
  • Copies an object to another folder.
  • Changes to the original object do not reflect in the copy.
  • Duplicates the space.
  • Created from unshared folders.

Shortcut
  • A dynamic link to an object in another folder.
  • Dynamically reflects changes to the original object.
  • Preserves the space.
  • Created from shared folders.


Question 31 
Suppose we do not group by on any port of the Aggregator; what will be the output?
If we do not group values, the Integration Service returns only the last row of the input rows.
Question 32 
What is the difference between variable port and  mapping variable?

Variable Port
  • Local to the transformation.
  • Values are not persistent.
  • Cannot be used with a SQL override.

Mapping Variable
  • Local to the mapping.
  • Values are persistent.
  • Can be used with a SQL override.

Question 33
What happens to a mapping if we alter the data types between a Source and its corresponding Source Qualifier?
The Source Qualifier transformation displays the Informatica data types. The transformation data types determine how the source database binds data when the Integration Service reads it. If we alter the data types in the Source Qualifier transformation, or if the data types in the Source definition and the Source Qualifier transformation do not match, the Designer marks the mapping as invalid when we save it.

Question 34 
How do you keep a persistent lookup cache in sync with the lookup table?
Simply enable the Re-cache option of the Lookup transformation to rebuild the lookup cache from the lookup table.
Question 35
Will a session fail if the SELECT list columns in the custom override SQL query and the output ports order in the Source Qualifier transformation do not match?
A mismatch, or a change in the order of the selected columns in the Source Qualifier's SQL query override relative to the connected transformation output ports, may produce unexpected values for ports whose data types happen to match, and otherwise leads to session failure.
Question 36 
Can we use mapping/workflow variables in standalone email task?
No, we can only use the pre-defined built-in variables.
Question 37
What are the characteristics of active transformation?
A transformation is active when it satisfies any of the following four conditions:

1. The number of rows at the source is not equal to the number of rows at the target.
2. Transaction boundaries for the input change.
3. The row type changes.
4. It changes the order of data.
Question 38
Can we use $PMTargetName@numAffectedRows, $PMTargetName@numAppliedRows, or $PMTargetName@numRejectedRows in post-session variable assignment?
We cannot use these variables in post-session variable assignment, but we can use them in post-session command tasks.

Question 39 
Can we connect to Microsoft SQL Server database with Kerberos authentication from Informatica?
Yes, we can connect to a Microsoft SQL Server database with Kerberos authentication from Informatica 10; it requires a few configuration changes.

Question 40 
What is informatica domain ?
When you install and run the Informatica services, the installation is known as a node, and the node becomes part of an Informatica domain. A domain is a grouping of one or more nodes and forms the environment in which the Informatica service processes run. A gateway node can also be a master gateway node.

Question 41 
What is an Informatica EBF?
EBF stands for Emergency Bug Fix, a targeted patch that Informatica provides for a specific defect.
Question 42
What is the purpose of the Synchronize Dynamic Cache option of a Lookup?

It can be used when multiple sessions update the same target simultaneously, to avoid integrity issues. The synchronization behavior differs from a plain dynamic cache only for inserts; for updates it is the same as a dynamic cache, i.e. when rows marked for update are received, the dynamic cache is updated and NewLookupRow = 2.

Question 43 
How can you limit the use of parameter files in Informatica?
Define the parameters in a metadata table, create a mapping that assigns the values using SETVARIABLE, and then pass those values to different sessions using post-session variable assignment.

Question 44
How can you issue an operating system command in the middle of a mapping?
This can be achieved with a Java transformation, using Java's built-in ProcessBuilder, as sketched below.
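A minimal sketch of the ProcessBuilder call as it might appear in a Java transformation's On Input Row code; the command and file name are illustrative:

// Run an OS command for the current row (command is illustrative)
try {
    ProcessBuilder pb = new ProcessBuilder("/bin/sh", "-c", "touch /tmp/row_processed.flag");
    pb.redirectErrorStream(true);   // merge stderr into stdout
    Process p = pb.start();
    p.waitFor();                    // block until the command completes
} catch (Exception e) {
    logError("OS command failed: " + e.getMessage());  // Java transformation logging helper
}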

Question 45
How does sorted input improve Aggregator transformation performance?
The Integration Service creates index and data cache files in memory to process the Aggregator transformation. If the Integration Service requires more space than the index and data cache sizes set in the transformation properties, it stores overflow values in cache files, i.e. it pages to disk. One way to increase session performance is to increase the index and data cache sizes in the transformation properties. But when we check Sorted Input, the Integration Service processes the Aggregator transformation in memory and does not use cache files.

Question 46
Can we call Stored Procedure in Source Qualifier query?
It is possible to use a stored procedure as a source as long as it returns a result set.
The result set returned to PowerCenter by a Microsoft SQL Server stored procedure is that of the last SELECT statement executed in the procedure source code; a SQL Server stored procedure can be called directly in the SQL override of the Source Qualifier transformation.
It is not possible to call an Oracle stored procedure in the Source Qualifier SQL override: calling the procedure requires a variable to collect the OUT REF CURSOR, which cannot be done in a Source Qualifier. A workaround is possible for Oracle stored procedures using a pre-session task.

Question 47
What is the difference between $ and $$
$ represents a session parameter/variable; you declare it at the session level.

$$ represents a mapping parameter/variable; you declare it in the Mapping Designer. An example parameter file entry for each is sketched below.
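A minimal parameter file sketch showing both kinds side by side; the folder, workflow, session, and variable names are illustrative, while $PMSessionLogFile is a built-in session parameter:

[MyFolder.WF:wf_load.ST:s_m_load]
$PMSessionLogFile=s_m_load.log
$$MAP_VAR=10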

Question 48
What is the priority in the Source Qualifier if we give both a filter condition (EMPNAME = 10) and a SQL override (EMPNO = 20)?
If we double-click the Source Qualifier, we can see both properties, the filter condition and the SQL override. The SQL override has the highest priority, so it takes the condition EMPNO = 20. If we don't provide a SQL override, the filter condition is applied.

Question 49
Can we connect more than one source to a single source qualifier?
Yes, we can connect more than one source to a single Source Qualifier. When you drag in multiple sources, the Designer creates one Source Qualifier per source; manually delete all Source Qualifiers except one, and then connect the ports of the other sources to the remaining one.

Question 50
What is impacted mapping?
If any component of the mapping changes, for example its sources, targets, reusable transformations, or mapplets, the mapping becomes impacted and is flagged with a yellow triangle symbol. We have to validate the mapping to make the impacted symbol disappear.